Introduction to Neural Networks
See References section at the end for details.
We also recommend the Neural Networks section of the 3Blue1Brown website.
# Required imports
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# import string
import pickle
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_circles
# from sklearn.feature_extraction.text import CountVectorizer
# from sklearn.naive_bayes import MultinomialNB
# from sklearn.metrics import classification_report
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
#import joblib
import warnings
try:
    import networkx as nx
except ModuleNotFoundError:
    print("networkx not installed. Please install it first (backup your environment first)")
Introduction to Artificial Neural Networks
An artificial neuron is a computational unit that performs a quite simple mathematical operation. Artificial neurons are usually illustrated with diagrams such as the picture below (this one is Figure 13-2 from (Glassner 2021)). The computation is represented as a graph where the operations proceed from left to right:
The artificial neuron takes a series of numerical values as inputs (the leftmost arrows entering the graph). Let us represent these inputs as \(s_1, s_2, \ldots, s_k\) .
Each input is multiplied by a certain weight \(w_1, w_2, \ldots, w_k\) associated with the arrow corresponding to that input and the products are used to form a linear combination: \[w_1s_1 + w_2s_2 + \cdots + w_ks_k\] This step is indicated in the diagram by the plus sign in a circle.
A bias term \(w_0\) is added to the linear combination: \[w_0 + w_1s_1 + w_2s_2 + \cdots + w_ks_k\] To simplify the discussion and notation we will normally use the bias trick , adding an extra input with fixed value 1 so that the inputs are \(1, s_1, s_2, \ldots, s_k\) and then the bias can be included into the list of weights.
Finally, the linear combination goes into an activation function \(h\), represented by the step function in the rightmost node of the diagram. We will later discuss in detail what activation functions are used in practice, but in any case the resulting value, the output of the artificial neuron, will always be \[h(w_0 + w_1s_1 + w_2s_2 + \cdots + w_ks_k)\]
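As a quick numeric sketch (with made-up weights, not taken from any figure), the whole computation of a single artificial neuron fits in a few lines:

```python
import numpy as np

# One artificial neuron, using the bias trick: prepend a constant 1
# to the inputs so the bias w_0 joins the weight vector.
def neuron(s, w, h):
    s = np.concatenate(([1.0], s))   # inputs become (1, s_1, ..., s_k)
    return h(np.dot(w, s))           # h(w_0 + w_1 s_1 + ... + w_k s_k)

step = lambda z: 1.0 if z >= 0 else 0.0   # step activation, as in the diagram

# Example with k = 2 inputs and weights (w_0, w_1, w_2) = (0.5, 1.0, 1.0):
# the linear combination is 0.5 + 2.0 - 1.0 = 1.5, so the step outputs 1.0
print(neuron(np.array([2.0, -1.0]), np.array([0.5, 1.0, 1.0]), step))  # 1.0
```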
Artificial neurons were introduced in the 1940s (see (Fitch 1944)) and captured the interest of the research community up to the late 1960s. They were conceived in an attempt to provide an abstract model of the operations performed by the neurons in animal brains. Of course, a biological neuron is a far more sophisticated system, with a complex regulation composed of chemical and electrical signals. To say that the artificial neuron is a simplistic model of the biological reality is a huge understatement. But the analogy stuck, and to the present day these computational units are still referred to as artificial neurons. You can learn more about the first years of artificial intelligence in this video and in the book (Hecht-Nielsen 1990). See also the first pages of the lecture notes from last year and the references provided therein.
Let us think about the impact of the choice of activation function on the class of problems that can be addressed by an artificial neuron. If the activation function is the identity, meaning \[h(x) = x,\] then the artificial neuron is nothing but a linear regression model. If it is a sigmoidal function, as in \[h(x) = \frac{1}{1 + e^{-x}}\] then the artificial neuron model is simply a logistic regression model. In any case, for any choice of activation function, a single neuron model can only be used for linearly separable problems.
Things change when we consider not a single neuron, but networks of several, possibly many, interconnected neurons. This is the idea behind Artificial Neural Networks (ANNs). These are models composed of artificial neurons as building blocks, whose architecture can be described by a (directed) graph of nodes and arrows that indicates how the computations (for training and prediction) are performed using the model. One such graph appears below (figure 13-7 in (Glassner 2021)).
As you can see, the output of any neuron in the network can be used as the input to one or several other neurons in the network, downstream in the direction indicated by the arrows connecting the neurons. Note also some simplifications that are common in this type of ANN diagram:
We do not include the weights associated with each input. Every arrow entering a neuron is assumed to carry an implicit weight along with it.
The situation for the bias is even more extreme: the bias terms do not appear, not even as an arrow implying their presence. But they are there, and you need to keep in mind that every neuron includes its own bias term.
There is in principle no limitation to the topology or architecture of the ANN; that is, the number and ways in which neurons are connected to each other. However, already in the early days of AI it was realized that sticking to some particular families of ANN architectures greatly simplified their understanding and their use.
The fundamental (but not the only) building block of today’s ANN architectures is the layer of neurons. And the simplest and historically first example of the use of this concept is the multilayer perceptron network, like the one in the picture below (figure 13-11 in (Glassner 2021)):
A multilayer perceptron network has:
One input layer , which simply contains the (vector of) values of the input data set.
One or more hidden layers of neurons. In modern ANNs there are a number of different types of hidden layers. But in the multilayer perceptron all the hidden layers are of the same type: they are dense (or fully connected) layers , meaning that each neuron in layer \(i\) is connected with every neuron in layer \(i+1\).
And an output layer . The structure of the output layer depends on the nature of the problem that the network is intended to solve. For a regression problem the output layer has a single neuron that returns the predicted output value for the inputs we have used. In a binary classification problem the output layer will again have a single neuron, but the value is now interpreted as the score of the positive class, as in other classification algorithms we have seen. In a multiclass problem the output layer will have more neurons, each one producing the score corresponding to a given class (level) of the output factor.
We are going to discuss in detail all of these components below.
The multilayer perceptron is a particular type of feedforward neural network . The term feedforward indicates that the information flows in one direction, from the input to the output (left to right in the above diagram), without any feedback loops or recurrent connections. Multilayer perceptrons are historically the first examples of neural networks. But there are other important members of the feedforward family, such as the radial basis function (RBF) networks, which are related to the Gaussian mixture models we have seen. The convolutional neural networks (CNNs) are technically feedforward networks as well.
Reflections about the ANN architecture and the number of parameters:
How many parameters (weights and bias terms) does the network in the picture contain?
Now change the architecture, but keep the total number of neurons in the hidden layers constant. How does this affect the number of parameters of the network?
Recall that the output of an artificial neuron with inputs \((1, s_1, \ldots, s_k)\) and weights \((w_0, w_1, \ldots, w_k)\) (note the use of the bias trick) is given by \[h(w_0 + w_1s_1 + w_2s_2 + \cdots + w_ks_k)\] If the activation function is a linear function, the result is still a linear (more precisely affine) function of the neuron inputs. And extending that to the ANN, if all the activation functions in the hidden layers of the network are linear, then the ANN can only address linearly separable problems. This result holds irrespective of the number of layers, the number of neurons in each layer and, importantly, the activation function of the output layer . This situation is described as the collapse of the network .
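A small numerical experiment (with random matrices, purely for illustration) makes the collapse concrete: stacking two dense layers with identity activations produces exactly the same outputs as a single affine map.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two dense "hidden layers" with identity activation:
# weight matrices W1 (4x2), W2 (3x4) and bias vectors b1, b2
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)

x = rng.normal(size=2)                     # an arbitrary input
two_layers = W2 @ (W1 @ x + b1) + b2       # output of the stacked layers

# The stack collapses into the single affine map W x + b:
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b
print(np.allclose(two_layers, one_layer))  # True
```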
To summarize the idea, if we want our ANNs to go beyond linear models we need two things:
1. The network needs to include hidden layers (at least one). 2. The neurons in the hidden layers need to use non-linear activation functions .
If that is the case then it can be formally shown (through a mathematical proof) that a network with even a single hidden layer but with sufficiently many neurons can be used to approximate any continuous function. That is, multilayer perceptrons with non linear activation functions are universal function approximators (see e.g. (Hornik, Stinchcombe, and White 1990 ) and the references therein).
The above does not mean that linear activation functions are never used. For example, for a regression problem the activation function of the output layer is usually a linear function; in fact the identity \(h(x) = x\) is often used. There are also some more technical uses for linear activations in the hidden layers of neural networks.
Let us review some of the best known and most widely used non linear activation functions.
Sigmoid Activation Function
This is a function that we already know from our work with logistic regression: \[\sigma(x) = \dfrac{1}{1 + e^{-x}}\] Let us use Python to plot it:
x = np.linspace(-15, 15, 100)
fig, ax = plt.subplots(figsize=(8, 3))
sns.lineplot(x=x, y=1 / (1 + np.exp(-x)))
Hyperbolic Tangent Activation Function
This function is similar in shape to the logistic curve, but it constrains the values to be in the \([-1, 1]\) interval (instead of the \([0, 1]\) interval of the logistic): \[\tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}\]
fig, ax = plt.subplots(figsize=(8, 3))
sns.lineplot(x=x, y=np.tanh(x))
RELU (Rectified Linear) Activation Function
Probably the most popular activation function nowadays, the RELU function is defined as: \[
RELU(x) =
\begin{cases}
x &\text{ if } x \geq 0\\[3mm]
0 &\text{ if } x < 0
\end{cases}
\] which means that the plot is:
def relu(x):
    return np.array([max(0, u) for u in x])

fig, ax = plt.subplots(figsize=(8, 3))
sns.lineplot(x=x, y=relu(x))
As you can see, this function is not linear, but piecewise linear . However, this very basic kind of non linearity is enough to prevent the network collapse. On the other hand, computing the RELU value only requires a simple comparison, whereas the previous activation functions involve exponentials and are computationally much more expensive. This is one of the reasons why RELU has become so popular in recent years.
Some theoretical misunderstandings related to the network collapse phenomenon combined with the limitations of the hardware led to what was called the first AI winter and research stalled for a long time. Another factor contributing to that decrease of interest in AI was of an algorithmic nature. Some pioneering researchers already knew that multilayer models and non linear activations would solve the issue. But at the time it was not known how to train a multilayer model . A new fundamental algorithm, called backpropagation was needed for that.
In the meantime, a second era of AI methods appeared in the form of so called expert systems in the 1980s. They relied on a completely different methodology: rule-based systems . Those are built on a set of explicit rules or if-then statements that try to encode the knowledge and expertise of human experts. In particular they did not use neural networks. But they turned out to be too hard to maintain, and the expectations, hype and investments soon vanished. The second AI winter and the bursting of the dot-com bubble (driven by speculative investment in internet-related companies during the late 1990s) occurred around the same time period. Thus, for a long time, the AI name vanished from the research vocabulary. But the neural network researchers had kept working, and in 1986 the backpropagation algorithm became popular due to a paper in Nature. It had some success, but it was not until the 2010s, when comparatively cheap yet powerful GPUs became available, that backprop (as it is usually called) found the adequate hardware to unleash its parallel computing capabilities. This has led to what is sometimes called the current AI spring in the 2020s, and the popularization of some AI products such as LLMs. You can get a privileged perspective on that period by watching this interview with Geoffrey Hinton by Andrew Ng from deeplearning.ai.
But keep the past in mind, for some already think (hope?) that we are doomed to a new AI winter. This column in a major newspaper in Spain, Inteligencia artificial: nada que hacer ("Artificial intelligence: nothing to be done"), J. Sampedro, EL PAÍS 2024-04-20, after expressing concerns about the potential job losses that AI could cause, ended with this paragraph:
“But let’s end with some good news. ChatGPT and other systems of this kind are not going to grow exponentially. These models have been improving so far by devouring texts from the internet (more texts, better results), but they have already swallowed almost everything. Stagnation is approaching.”
The text of the article is in Spanish. Ironically, the above English translation was obtained using ChatGPT.
The output layer of a neural network behaves differently depending on the nature of the problem.
Regression: If we are dealing with a regression problem then the output layer has a single neuron. This neuron will take as inputs the values coming from the last hidden layer, let us call them \[A_1,\ldots, A_k\] and then the output layer will use its set of weights \(w_1, \ldots, w_k\) and bias term to output the affine combination: \[w_0 + w_1 A_1 +\cdots + w_k A_k\] This is the final prediction of the model. You may wonder about the activation function for this output layer, but the above is equivalent to using the identity \(h(x) = x\) as activation, as we mentioned before.
Binary Classification: In this case we want to use the output as a score for the positive class of the output variable. In particular we only need the neural network to predict a single number. Therefore in this case again the output layer contains a single neuron. The only difference with the previous regression case is that now the activation function is not the identity but the sigmoid curve we have introduced above: \[\sigma(x) = \dfrac{1}{1 + e^{-x}}\]
Multiclass Classification and Softmax Function: This is a different scenario, because if the output variable is a factor with \(p\) levels, then the network needs to predict scores for each of them. And so in this case the output layer contains \(p\) neurons. The outputs \(u_1, u_2, \ldots, u_p\) of these neurons are then typically converted into a new set of numbers \(p_1, p_2,\ldots, p_p\) by using a softmax transformation defined by: \[p_1 = \dfrac{e^{u_1}}{e^{u_1} + e^{u_2} + \cdots + e^{u_p}},\quad
p_2 = \dfrac{e^{u_2}}{e^{u_1} + e^{u_2} + \cdots + e^{u_p}},\quad
\ldots \quad
, \quad p_p = \dfrac{e^{u_p}}{e^{u_1} + e^{u_2} + \cdots + e^{u_p}}
\] This transformation clearly implies that the numbers \(p_i\) verify \[0\leq p_1, p_2, \ldots, p_p\leq 1\qquad\text{and}\qquad p_1 + p_2 + \cdots + p_p = 1\] and so they are easy to interpret as (usually uncalibrated) probabilities for the \(p\) output factor classes.
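For instance, here is a minimal NumPy version of the softmax transformation (subtracting the maximum before exponentiating is a standard numerical-stability trick, not part of the definition, and it does not change the result):

```python
import numpy as np

def softmax(u):
    e = np.exp(u - np.max(u))  # shift by max(u) to avoid overflow; result unchanged
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical neuron outputs u_1, u_2, u_3
p = softmax(scores)
print(p.sum())  # the p_i always add up to 1
```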
Multilayer Perceptron Examples in scikit-learn
Now that we have an initial idea of the architecture of the MLP, we will begin by using sklearn to train a neural network (an MLP in this example) for a binary classification example that we have already used in previous sessions. Keep in mind that sklearn provides implementations only for the most classical and basic ANN architectures, which are precisely the MLPs. For more sophisticated architectures (and to get the most out of the hardware) we need to move our work to deep learning libraries such as Tensorflow/Keras or PyTorch .
Let us begin by recreating the example dataset. To make things more interesting we select an example where a linear boundary is clearly not sufficient.
X, Y = make_circles(n_samples=1000, factor=0.2, noise=0.15, random_state=2024)
inputs = ["X" + str(k) for k in range(X.shape[1])]
output = "Y"
XTR, XTS, YTR, YTS = train_test_split(X, Y,
                                      test_size=0.2,    # percentage preserved as test data
                                      random_state=1,   # seed for replication
                                      stratify=Y)       # preserves distribution of Y
dfTR = pd.DataFrame(XTR, columns=inputs)
dfTR[output] = YTR
dfTS = pd.DataFrame(XTS, columns=inputs)
dfTS[output] = YTS
fig, ax = plt.subplots(figsize=(5, 5))
sns.scatterplot(dfTR, x=inputs[0], y=inputs[1], hue=output, ax=ax)
plt.show()
Next we use the MLPClassifier class in sklearn to train an MLP. To define the model we need to describe the architecture of the neural network. In the case of MLPs, this amounts to defining the number of hidden layers and the number of neurons they contain. For this example we are going to use two hidden layers, and we describe them with a tuple containing the number of neurons in each layer.
Then we also need to select the activation function used by those hidden layers. The MLP implementation in sklearn only allows choosing a single type of activation, which is applied to all hidden layers. The activation for the output layer is then automatically selected depending on the output variable (a sigmoid for this binary classification problem).
Another important parameter is the number of fitting iterations. We have not yet discussed how the neural network is fitted, but it should come as no surprise that this is an iterative process. If the default number of iterations is not enough to reach convergence, we can increase it.
# Create and train the MLPClassifier
mlp_binary = MLPClassifier(hidden_layer_sizes=(8, 6), activation="relu", max_iter=10000, random_state=2024)
mlp_binary.fit(XTR, YTR)
model = mlp_binary
model_name = "mlp_binary"
Let us also add the model to a dictionary.
make_circles_model = {'mlp_binary': mlp_binary}
The fitted model object contains information about the MLP weights and bias terms in the coefs_ and intercepts_ properties. coefs_ is a list of arrays, each one describing the weights connecting one layer to the next. We can inspect their shapes with
[layer.shape for layer in model.coefs_]
The first array contains the 16 weights connecting the two inputs (point coordinates) to each of the eight neurons of the first hidden layer:
array([[ 8.77310919e-02, 1.67391652e-01, -9.11823710e-01,
-1.63793552e+00, -1.34193208e+00, -5.41714423e-01,
1.44105686e+00, 2.38045784e-01],
[ 3.55949269e-03, -2.48328243e-05, 4.50855527e-01,
7.82153097e-01, -1.08781219e+00, 2.08245911e+00,
6.89513290e-01, 1.29103564e-01]])
The second array contains the 48 weights connecting the two hidden layers. And finally there are six weights connecting the neurons of the second hidden layer to the single neuron in the output layer (for this binary classification problem).
Inspect the intercepts_ and make sure that you understand their meaning.
This should be very easy now: how many parameters (weights and bias terms) does the network in this model contain?
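If you want to check your count programmatically, the sizes of the arrays in coefs_ and intercepts_ add up to the total number of parameters. Here is a sketch on a small throwaway model with a different, hypothetical architecture (5, 3), so as not to spoil the exercise:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Fit a throwaway MLP with 2 inputs and hidden layers (5, 3) on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
mlp = MLPClassifier(hidden_layer_sizes=(5, 3), max_iter=200, random_state=0).fit(X, y)

n_weights = sum(W.size for W in mlp.coefs_)      # 2*5 + 5*3 + 3*1 = 28
n_biases = sum(b.size for b in mlp.intercepts_)  # 5 + 3 + 1 = 9
print(n_weights + n_biases)  # 37
```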
Execute the next cell to run the external script containing the mlp_draw function. We provide this function (that uses the networkx library) to make it easier to visualize small MLP models trained with sklearn.
Now we only need to feed the fitted model to the function. The orientation, figsize, label_pos and label_size arguments allow some customization of the result. Check the architecture of the network in the plot, and see if you can recognize some of the weights that we saw before.
mlp_draw(model=model, orientation="h", label_pos=2.3 / 7, add_labels=True)
Another interesting visualization is the decision boundary of this binary classifier, which clearly shows that it has learned a nonlinear solution.
disp = DecisionBoundaryDisplay.from_estimator(model, XTR, response_method="predict", cmap="Greens", alpha=0.5)
sns.scatterplot(dfTR, x=inputs[0], y=inputs[1], hue=output)
plt.show()
Change the activation function to identity instead of relu (read the sklearn documentation to understand what we are doing). Then refit the MLP and repeat the decision boundary plot. What happens?
Let us now evaluate the classifier performance, using our standard toolset:
# Dataset for training predictions
dfTR_eval = dfTR.copy()
# Store the actual predictions
newCol = 'Y_' + model_name + '_prob_neg'
dfTR_eval[newCol] = model.predict_proba(XTR)[:, 0]
newCol = 'Y_' + model_name + '_prob_pos'
dfTR_eval[newCol] = model.predict_proba(XTR)[:, 1]
newCol = 'Y_' + model_name + '_pred'
dfTR_eval[newCol] = model.predict(XTR)
# Test predictions dataset
dfTS_eval = dfTS.copy()
newCol = 'Y_' + model_name + '_prob_neg'
dfTS_eval[newCol] = model.predict_proba(XTS)[:, 0]
newCol = 'Y_' + model_name + '_prob_pos'
dfTS_eval[newCol] = model.predict_proba(XTS)[:, 1]
newCol = 'Y_' + model_name + '_pred'
dfTS_eval[newCol] = model.predict(XTS)
This generates the confusion matrices and shows that classification is almost perfect both in training and test.
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

fig = plt.figure(constrained_layout=True, figsize=(6, 2))
spec = fig.add_gridspec(1, 3)
ax1 = fig.add_subplot(spec[0, 0]); ax1.set_title('Training'); ax1.grid(False)
ax2 = fig.add_subplot(spec[0, 2]); ax2.set_title('Test'); ax2.grid(False)
ConfusionMatrixDisplay.from_estimator(model, XTR, YTR, cmap="Greens", colorbar=False, ax=ax1, labels=[1, 0])
ConfusionMatrixDisplay.from_estimator(model, XTS, YTS, cmap="Greens", colorbar=False, ax=ax2, labels=[1, 0])
plt.suptitle("Confusion Matrices for " + model_name)
plt.show()
The ROC curves tell a similar story. We will settle for noting that this looks good enough and will not spend more time on performance measures. Feel free to dig deeper; you have all the tools from previous sessions.
from sklearn.metrics import RocCurveDisplay

fig = plt.figure(figsize=(12, 4))
spec = fig.add_gridspec(1, 2)
ax1 = fig.add_subplot(spec[0, 0]); ax1.set_title('Training')
ax2 = fig.add_subplot(spec[0, 1]); ax2.set_title('Test')
RocCurveDisplay.from_estimator(model, XTR, YTR, plot_chance_level=True, ax=ax1)
RocCurveDisplay.from_estimator(model, XTS, YTS, plot_chance_level=True, ax=ax2)
plt.suptitle("ROC Curves for " + model_name)
plt.show()
Using Tensorflow / Keras to fit the neural network
The code below will not produce any useful result until you update your environment. You need to install the tensorflow library. The precise instructions depend on your operating system and the hardware in your computer. For example, if you have a GPU you may want to install a GPU capable version of tensorflow.
As of March 2025, for compatibility reasons we recommend installing the tensorflow library version 2.12.0 (the most recent version is 2.18).
try:
    import tensorflow as tf
    print("Tensorflow installed", tf.__version__)
    tf_ok = True
    # Check if GPU is available
    print("GPU", "available (YES!)" if tf.config.list_physical_devices('GPU') else "not available :(")
except ModuleNotFoundError:
    print("Tensorflow not installed. Please install it first (backup your environment first)")
    tf_ok = False
Tensorflow installed 2.12.0
GPU available (YES!)
if tf_ok:
    X_train, X_valid, y_train, y_valid = train_test_split(XTR, YTR, test_size=0.2, random_state=2025)
    # tf.random.set_seed(2025)
    tf.keras.utils.set_random_seed(2025)
    # norm_layer = tf.keras.layers.Normalization(input_shape=X_train.shape[1:])
    model = tf.keras.Sequential([
        # tf.keras.layers.Normalization(input_shape=X_train.shape[1:]),
        tf.keras.layers.InputLayer(input_shape=X_train.shape[1:], name="input_layer"),
        tf.keras.layers.Dense(8, activation="relu", name="hidden_layer_1"),
        tf.keras.layers.Dense(6, activation="relu", name="hidden_layer_2"),
        tf.keras.layers.Dense(1, activation="sigmoid", name="output_layer")
    ])
    keras_version = tf.keras.__version__
    print(f"Installed TensorFlow Keras version: {keras_version}")
    reference_version = "3.8.0"
    from packaging.version import parse
    if parse(keras_version) >= parse(reference_version):
        optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
    else:  # For Macs with Apple Silicon M1, M2, etc.
        optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)
    model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
    # norm_layer.adapt(X_train)
    history = model.fit(X_train, y_train, epochs=100, batch_size=16,
                        validation_data=(X_valid, y_valid))
    _, accuracy_test = model.evaluate(XTS, YTS)
    y_pred = model.predict(XTS)
    print("accuracy_test", accuracy_test)
    history_df = pd.DataFrame(history.history)
    history_df.plot(figsize=(8, 5))
    model_name = "mlp_tf"
    newCol = 'Y_' + model_name + '_prob_neg'
    dfTS_eval[newCol] = 1 - y_pred
    newCol = 'Y_' + model_name + '_prob_pos'
    dfTS_eval[newCol] = y_pred
    newCol = 'Y_' + model_name + '_pred'
    threshold = 0.5
    dfTS_eval[newCol] = (y_pred > threshold).astype(int)
Metal device set to: Apple M2
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
Installed TensorFlow Keras version: 2.12.0
Epoch 1/100
2025-03-11 11:15:20.650633: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
40/40 [==============================] - 1s 7ms/step - loss: 0.6629 - accuracy: 0.5406 - val_loss: 0.6406 - val_accuracy: 0.6812
Epoch 2/100
40/40 [==============================] - 0s 4ms/step - loss: 0.6446 - accuracy: 0.5984 - val_loss: 0.6183 - val_accuracy: 0.7563
Epoch 3/100
40/40 [==============================] - 0s 5ms/step - loss: 0.6235 - accuracy: 0.6797 - val_loss: 0.5910 - val_accuracy: 0.8188
Epoch 4/100
40/40 [==============================] - 0s 4ms/step - loss: 0.5978 - accuracy: 0.7391 - val_loss: 0.5577 - val_accuracy: 0.8625
Epoch 5/100
40/40 [==============================] - 0s 4ms/step - loss: 0.5638 - accuracy: 0.8687 - val_loss: 0.5224 - val_accuracy: 0.9375
Epoch 6/100
40/40 [==============================] - 0s 4ms/step - loss: 0.5265 - accuracy: 0.9187 - val_loss: 0.4825 - val_accuracy: 0.9438
Epoch 7/100
40/40 [==============================] - 0s 4ms/step - loss: 0.4860 - accuracy: 0.9297 - val_loss: 0.4406 - val_accuracy: 0.9563
Epoch 8/100
40/40 [==============================] - 0s 4ms/step - loss: 0.4427 - accuracy: 0.9547 - val_loss: 0.3995 - val_accuracy: 0.9625
Epoch 9/100
40/40 [==============================] - 0s 4ms/step - loss: 0.3994 - accuracy: 0.9672 - val_loss: 0.3609 - val_accuracy: 0.9875
Epoch 10/100
40/40 [==============================] - 0s 4ms/step - loss: 0.3576 - accuracy: 0.9750 - val_loss: 0.3238 - val_accuracy: 0.9875
Epoch 11/100
40/40 [==============================] - 0s 7ms/step - loss: 0.3168 - accuracy: 0.9859 - val_loss: 0.2860 - val_accuracy: 0.9875
Epoch 12/100
40/40 [==============================] - 0s 4ms/step - loss: 0.2780 - accuracy: 0.9875 - val_loss: 0.2545 - val_accuracy: 0.9937
Epoch 13/100
40/40 [==============================] - 0s 4ms/step - loss: 0.2424 - accuracy: 0.9922 - val_loss: 0.2238 - val_accuracy: 0.9937
Epoch 14/100
40/40 [==============================] - 0s 4ms/step - loss: 0.2103 - accuracy: 0.9953 - val_loss: 0.1961 - val_accuracy: 0.9937
Epoch 15/100
40/40 [==============================] - 0s 4ms/step - loss: 0.1820 - accuracy: 0.9937 - val_loss: 0.1727 - val_accuracy: 0.9937
Epoch 16/100
40/40 [==============================] - 0s 4ms/step - loss: 0.1581 - accuracy: 0.9953 - val_loss: 0.1533 - val_accuracy: 0.9937
Epoch 17/100
40/40 [==============================] - 0s 4ms/step - loss: 0.1374 - accuracy: 0.9953 - val_loss: 0.1394 - val_accuracy: 0.9937
Epoch 18/100
40/40 [==============================] - 0s 4ms/step - loss: 0.1205 - accuracy: 0.9969 - val_loss: 0.1271 - val_accuracy: 0.9875
Epoch 19/100
40/40 [==============================] - 0s 5ms/step - loss: 0.1066 - accuracy: 0.9969 - val_loss: 0.1122 - val_accuracy: 0.9937
Epoch 20/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0949 - accuracy: 0.9969 - val_loss: 0.1034 - val_accuracy: 0.9937
Epoch 21/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0851 - accuracy: 0.9969 - val_loss: 0.0957 - val_accuracy: 0.9937
Epoch 22/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0769 - accuracy: 0.9969 - val_loss: 0.0885 - val_accuracy: 0.9937
Epoch 23/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0702 - accuracy: 0.9984 - val_loss: 0.0847 - val_accuracy: 0.9875
Epoch 24/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0640 - accuracy: 0.9969 - val_loss: 0.0778 - val_accuracy: 0.9875
Epoch 25/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0589 - accuracy: 0.9984 - val_loss: 0.0752 - val_accuracy: 0.9875
Epoch 26/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0546 - accuracy: 0.9984 - val_loss: 0.0706 - val_accuracy: 0.9875
Epoch 27/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0505 - accuracy: 0.9984 - val_loss: 0.0675 - val_accuracy: 0.9875
Epoch 28/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0473 - accuracy: 0.9984 - val_loss: 0.0655 - val_accuracy: 0.9875
Epoch 29/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0441 - accuracy: 0.9984 - val_loss: 0.0612 - val_accuracy: 0.9875
Epoch 30/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0418 - accuracy: 0.9984 - val_loss: 0.0582 - val_accuracy: 0.9937
[... epochs 31-99 omitted: training loss decreases steadily from 0.0392 to 0.0060 ...]
Epoch 100/100
40/40 [==============================] - 0s 4ms/step - loss: 0.0060 - accuracy: 1.0000 - val_loss: 0.0202 - val_accuracy: 0.9937
7/7 [==============================] - 0s 12ms/step - loss: 0.0206 - accuracy: 0.9950
7/7 [==============================] - 0s 3ms/step
accuracy_test 0.9950000047683716
Hide the code
tf.keras.utils.plot_model(model, show_shapes= True )
Hide the code
if tf_ok:
    # Get weights and biases for the first hidden layer
    layer = model.get_layer("hidden_layer_1")
    weights, biases = layer.get_weights()
    # Convert weights to a Pandas DataFrame
    df_weights = pd.DataFrame(weights, columns=[f"Neuron_{i}" for i in range(weights.shape[1])])
    print("First hidden layer weights")
    display(df_weights.head())
    # Get weights and biases for the second hidden layer
    layer = model.get_layer("hidden_layer_2")
    weights, biases = layer.get_weights()
    # Convert weights to a Pandas DataFrame
    df_weights = pd.DataFrame(weights, columns=[f"Neuron_{i}" for i in range(weights.shape[1])])
    print("Second hidden layer weights")
    display(df_weights.head())
First hidden layer weights

|   | Neuron_0 | Neuron_1 | Neuron_2 | Neuron_3 | Neuron_4 | Neuron_5 | Neuron_6 | Neuron_7 |
|---|----------|----------|----------|----------|----------|----------|----------|----------|
| 0 | -0.948409 | -0.669535 | 0.464856 | -0.346245 | -0.093539 | 1.688672 | -1.452033 | -0.969496 |
| 1 | 0.559497 | 1.352146 | -1.087582 | 0.126471 | 1.423023 | 0.247543 | -0.606401 | -1.514628 |

Second hidden layer weights

|   | Neuron_0 | Neuron_1 | Neuron_2 | Neuron_3 | Neuron_4 | Neuron_5 |
|---|----------|----------|----------|----------|----------|----------|
| 0 | -0.886917 | -0.708015 | 0.734934 | 0.905557 | 0.395608 | 0.407550 |
| 1 | -0.712108 | -0.922834 | 0.929950 | 1.322992 | 0.903260 | 0.736547 |
| 2 | -0.974036 | -1.082631 | 0.982491 | 1.286668 | 0.936544 | 1.329405 |
| 3 | 1.971897 | 1.709239 | -0.612694 | -0.547844 | -0.618196 | -0.736384 |
| 4 | -0.546009 | -0.220251 | 0.392202 | 0.562925 | 1.094411 | 0.980211 |
Hide the code
if tf_ok:
    disp = ConfusionMatrixDisplay.from_predictions(dfTS_eval['Y_mlp_binary_pred'], dfTS_eval['Y_mlp_tf_pred'], cmap="Greens")
    disp.ax_.set_xlabel('Scikit MLP prediction')
    disp.ax_.set_ylabel('Keras prediction')
    plt.show()
To illustrate this case we will train an MLP that uses the architecture in Figure 13-11 of (Glassner 2021 ) that we have seen before (reproduced below for convenience).
This MLP architecture can be used to classify the samples in the classical iris dataset. Recall that this dataset uses as inputs four morphological characteristics of the iris flowers, while the output factor Species has three levels.
Hide the code
iris = datasets.load_iris()
X = iris["data" ]
X_names = ["sepal_length" , "sepal_width" , "petal_length" , "petal_width" ]
df = pd.DataFrame(X, columns= X_names)
Y = "species"
df[Y] = iris["target" ]
df.head()
|   | sepal_length | sepal_width | petal_length | petal_width | species |
|---|--------------|-------------|--------------|-------------|---------|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | 0 |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | 0 |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | 0 |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | 0 |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | 0 |
Therefore, it requires an MLP model with four inputs and three output neurons. Let us create the train/test split and the datasets and variables that we will use. Note that we are scaling the input data.
Hide the code
inputs = X_names
output = Y
XTR, XTS, YTR, YTS = train_test_split(df[inputs], df[Y],
test_size= 0.2 , # percentage preserved as test data
random_state= 1 , # seed for replication
stratify = df[Y]) # Preserves distribution of y
iris_scaler = StandardScaler()
XTR = iris_scaler.fit_transform(XTR)
XTS = iris_scaler.transform(XTS)
dfTR = pd.DataFrame(XTR, columns= inputs)
dfTR[output] = YTR
dfTS = pd.DataFrame(XTS, columns= inputs)
dfTS[output] = YTS
Now we can define the model’s architecture and train it. Note that we only care about the number and size of the hidden layers. The classifier will look at the number of levels of the output and use the appropriate size and softmax function for the output layer.
Hide the code
mlp_multi = MLPClassifier(hidden_layer_sizes= (3 , 2 ), activation= "relu" , max_iter= 6000 , random_state= 2024 )
mlp_multi.fit(XTR, YTR)
model = mlp_multi
model_name = "mlp_multi"
Let us save the data and model to disk.
Hide the code
iris_data = {'XTR' :XTR, 'XTS' :XTS, 'YTR' :YTR, 'YTS' :YTS,
'inputs' :inputs, 'output' :output, 'iris_scaler' :iris_scaler}
# with open('iris_data.pkl', 'wb') as file:
# pickle.dump(iris_data, file)
iris_model = {'mlp_multi' :mlp_multi}
# with open('iris_model.pkl', 'wb') as file:
# pickle.dump(iris_model, file)
After the fit we can confirm the expected structure of the weights for this architecture:
Hide the code
[lyr.shape for lyr in model.coefs_]
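For this architecture (four inputs, hidden layers of sizes 3 and 2, three outputs) the expected shapes are [(4, 3), (3, 2), (2, 3)]. A minimal self-contained sketch of this check, using synthetic data instead of the iris split above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for a 4-input, 3-class problem
X = np.random.default_rng(0).normal(size=(60, 4))
y = np.arange(60) % 3  # guarantees all three classes are present

clf = MLPClassifier(hidden_layer_sizes=(3, 2), max_iter=20, random_state=0)
clf.fit(X, y)

# weight matrices: input -> hidden_1, hidden_1 -> hidden_2, hidden_2 -> output
print([W.shape for W in clf.coefs_])  # [(4, 3), (3, 2), (2, 3)]
```

Each matrix has one row per neuron in the previous layer and one column per neuron in the next one; the bias terms live separately in `intercepts_`.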
Now let us use the mlp_draw function to visualize the fitted network and weights (recall, bias terms are omitted).
Hide the code
mlp_draw(model= model, orientation= "h" , label_pos= 1.3 / 7 )
The score predictions of this MLP, as expected, appear as tuples of three values, the scores for each of the three iris species corresponding to the data point being classified. For example, the first flower in the test set gets these scores. The class is assigned using the highest score.
Hide the code
np.round (model.predict_proba(XTS)[0 , :], 3 )
array([0. , 0.001, 0.999])
Hide the code
predicted_class = model.predict(XTS)[0 ]
iris["target_names" ][predicted_class]
And in fact you can see that this very simple MLP model achieves almost perfect prediction on the test set.
Hide the code
pd.crosstab(model.predict(XTS), YTS)
| row_0 (predicted) | species 0 | species 1 | species 2 |
|-------------------|-----------|-----------|-----------|
| 0 | 10 | 0 | 0 |
| 1 | 0 | 10 | 1 |
| 2 | 0 | 0 | 9 |
The main goal of this example is to illustrate that there is not much of a difference between binary and multiclass MLP models, so we move on to another Machine Learning task.
Regression with Neural Networks
We will create a synthetic dataset for a simple regression problem with just one numeric input, but with a clearly nonlinear relation to the output.
Hide the code
rng = np.random.default_rng(2024 )
n = 1000
X = np.linspace(start= 0 , stop= 5 , num = n)
Y = X/ 2 + np.sin(X) * np.cos(2 * X) + 0.3 * rng.normal(size = n)
df = pd.DataFrame({"X" :X, "Y" :Y})
fig, ax = plt.subplots(figsize= (8 , 3 ))
# sns.lineplot(x = X, y = Y, ax=ax)
sns.scatterplot(data = df, x = "X" , y = "Y" , ax= ax, s= 5 )
Next we define the datasets and variables we are going to use:
Hide the code
inputs = ["X" ]
output = "Y"
XTR, XTS, YTR, YTS = train_test_split(df[inputs], df[output],
test_size= 0.2 , # percentage preserved as test data
random_state= 1 )
dfTR = pd.DataFrame(XTR, columns= inputs)
dfTR[output] = YTR
dfTS = pd.DataFrame(XTS, columns= inputs)
dfTS[output] = YTS
And we create an MLP model with two hidden layers. We are using a higher number of neurons here and we have switched to MLPRegressor . But note that again we do not need to describe the output layer in any way; sklearn uses the appropriate values for a regression problem. Note also that we are scaling the inputs to get better behavior in the model fit, and in order to do that we use the pipeline framework we know.
Hide the code
mlp_reg = MLPRegressor(hidden_layer_sizes= (80 , 20 ), activation= "relu" , random_state= 2024 , max_iter= 2000 )
reg_scaler = StandardScaler()
reg_scaler.set_output(transform= "pandas" )
mlp_reg_pipe = Pipeline(steps= [('scaler' , reg_scaler),
('mlp' , mlp_reg)])
mlp_reg_pipe.fit(XTR, YTR)
model = mlp_reg_pipe
model_name = "mlp_reg"
Let us save the data and model to disk (in Python’s pickle format).
Hide the code
mlp_reg_data = {'XTR' :XTR, 'XTS' :XTS, 'YTR' :YTR, 'YTS' :YTS,
'inputs' :inputs, 'output' :output, 'reg_scaler' :reg_scaler}
# with open('mlp_reg_data.pkl', 'wb') as file:
# pickle.dump(mlp_reg_data, file)
mlp_reg_model = {'mlp_reg' :mlp_reg_pipe}
# with open('mlp_reg_model.pkl', 'wb') as file:
# pickle.dump(mlp_reg_model, file)
Let us now look at the model predictions. To visualize the result and get a qualitative measure of the model fit, we plot the fitted values against the original data.
Hide the code
# Dataset for Training Predictions
dfTR_eval = dfTR.copy()
# Store the actual predictions
newCol = 'Y_' + model_name + '_pred' ;
dfTR_eval[newCol] = model.predict(XTR)
Hide the code
fig, ax = plt.subplots(figsize= (8 , 3 ))
sns.scatterplot(data= dfTR_eval, x = "X" , y = "Y" , color= "r" , ax= ax)
sns.scatterplot(data= dfTR_eval, x = "X" , y = newCol, color= "b" , ax= ax)
The same kind of plot for the test set illustrates that the model is indeed learning this nonlinear signal.
Hide the code
# Dataset for Training Predictions
dfTS_eval = dfTS.copy()
# Store the actual predictions
newCol = 'Y_' + model_name + '_pred' ;
dfTS_eval[newCol] = model.predict(XTS)
fig, ax = plt.subplots(figsize= (8 , 3 ))
sns.scatterplot(data= dfTS_eval, x = "X" , y = "Y" , color= "r" , ax= ax)
sns.scatterplot(data= dfTS_eval, x = "X" , y = newCol, color= "b" , ax= ax)
Below is a plot of the network architecture; with this many neurons it also illustrates the limitations of this visualization approach.
Hide the code
mlp_draw(model["mlp" ], orientation= "h" , figsize= (16 , 16 ), alpha= 0.5 , add_labels= False )
But remember that you can always explore the weights and bias terms through the model.
How many parameters are there in this, our largest MLP so far?
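A quick way to answer this is to sum the sizes of the weight matrices and bias vectors. The sketch below re-creates a small regressor with the same (80, 20) architecture on synthetic data rather than reusing the fitted pipeline above; the counting logic itself only relies on the coefs_ and intercepts_ attributes:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2024)
X = np.linspace(0, 5, 200).reshape(-1, 1)
Y = (X / 2 + np.sin(X) * np.cos(2 * X)).ravel()

mlp = MLPRegressor(hidden_layer_sizes=(80, 20), max_iter=50, random_state=2024)
mlp.fit(X, Y)

n_weights = sum(W.size for W in mlp.coefs_)      # 1*80 + 80*20 + 20*1 = 1700
n_biases = sum(b.size for b in mlp.intercepts_)  # 80 + 20 + 1 = 101
print(n_weights + n_biases)                      # 1801
```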
Training MLPs through Gradient Descent and Backpropagation
How is an MLP model trained? An admittedly short description of the process follows. The rest of this section is devoted to providing details about each of these steps (not necessarily in the order they appear below):
We choose a loss function , which measures the error in the model’s fit to the training data. Our ideal training goal is to bring the loss as close to zero as possible, while at the same time keeping overfitting under control.
We choose initial values for the weights and bias terms of the model. Spoiler: naive choices, such as setting them all to zero, are not a good idea!
This and the next step are performed iteratively until some stopping criterion is met. We use the training data, or fractions of it called batches , to get a prediction from the model, and therefore an evaluation of the loss function for the current choice of the model parameters. This step is called the forward pass because the information in the network flows forward (following the arrows).
Next we use the values collected in the forward pass and the backpropagation algorithm to compute the gradient of the loss function with respect to the parameters in each one of the network layers. With this gradient and some optimization method, like gradient descent and its relatives, we update all the weights and biases of the model.
The loss function measures the error the model makes in its predictions. And, as you already know, we use different expressions for the error depending on the nature of the model’s predictions. That is, depending on the kind of Supervised Machine Learning problem.
For regression the most commonly used loss functions are the \({\cal L}_{MAE}\) and \({\cal L}_{MSE}\) , defined as: \[
{\cal L}_{MAE} = \dfrac{1}{n}\sum_{i = 1}^n|\hat y_i - y_i|,\qquad
{\cal L}_{MSE} = \dfrac{1}{2n}\sum_{i = 1}^n (\hat y_i - y_i)^2
\] Here \(y_i\) is a true/observed value and \(\hat y_i\) is the corresponding prediction of the model.
For classification (either binary or multiclass) the most widely used loss function is the cross-entropy , defined as: \[{\cal L}_{ent} = -\sum_{i = 1}^n\sum_{j=1}^m y_{ij}\log\hat y_{ij}\] where \(y_{i1}, y_{i2}, \ldots, y_{im}\) is the one-hot encoding of the \(i\)-th input and \(\hat y_{i1}, \hat y_{i2}, \ldots, \hat y_{im}\) are the scores assigned by the model, for each of the \(m\) levels/classes of the output factor. For the binary case this boils down to \[{\cal L}_{ent} = -\sum_{i = 1}^n (y_i\log\hat y_i + (1 - y_i)\log(1 - \hat y_i))\]
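As a sanity check, both cross-entropy expressions can be computed directly with NumPy (the numbers below are toy values, not taken from any of the fitted models):

```python
import numpy as np

# one-hot targets and model scores for n = 3 samples and m = 3 classes
y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
y_hat = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])

# multiclass cross-entropy: -sum_i sum_j y_ij * log(y_hat_ij)
L_ent = -np.sum(y * np.log(y_hat))

# binary case: only the score assigned to class 1 is needed
yb = np.array([1, 0, 1])
pb = np.array([0.9, 0.2, 0.8])
L_bin = -np.sum(yb * np.log(pb) + (1 - yb) * np.log(1 - pb))
```

Note that in each term only the log of the score assigned to the true class contributes, so confident correct predictions give a loss close to zero.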
By the way, MLPRegressor always uses what we call \({\cal L}_{MSE}\) , while MLPClassifier always uses the cross-entropy \({\cal L}_{ent}\) . In other libraries there is more freedom to choose other loss functions or even define your own, tailored to the problem (e.g. to use different weights for the classes).
All these loss functions share some properties that are also critical for their use with the optimization algorithms we use while fitting the models: 1. They are differentiable functions with respect to their arguments. 2. They are sums over a set of observations. And therefore they can be estimated using any number of observations.
Let us emphasize this idea by playing with the code. Take the training data and fitted model of regression example:
Hide the code
YTR = mlp_reg_data["YTR" ]
XTR = mlp_reg_data["XTR" ]
model = mlp_reg_model["mlp_reg" ]
Now let us use this data and model to get the predictions for training, that correspond to what we called \(\hat y_i\) above. We show the first ones.
Hide the code
YTR_pred = model.predict(XTR)
YTR_pred[:10 ]
array([ 0.26670396, 3.69201112, 3.6049855 , 0.37045803, 1.58530131,
1.54973299, 0.36139089, 0.0949725 , -0.18409113, 3.38742146])
These YTR_pred predictions are the result of the forward pass of the entire training set through the network, using the present weights and bias terms of the model . And now we can use them to get the value of the loss function (properly speaking, an estimate of the value of the loss function):
Hide the code
loss = (1 / 2 ) * ((YTR_pred - YTR)** 2 ).mean()
loss
But if we change the size and/or values of the inputs, creating for example a new shorter array:
Hide the code
Xnew = np.random.default_rng(2024 ).normal(size = 10 ).reshape(- 1 , 1 )
Xnew
array([[ 1.02885687],
[ 1.64192004],
[ 1.14671953],
[-0.97317952],
[-1.3928001 ],
[ 0.06719636],
[ 0.86135092],
[ 0.5091868 ],
[ 1.81028557],
[ 0.75084347]])
and the associated output values:
Hide the code
Ynew = Xnew/ 2 + np.sin(Xnew) * np.cos(2 * Xnew) + 0.3 * rng.normal(size = Xnew.shape[0 ])
Then we can run them through the network (forward pass) and get a prediction:
Hide the code
Ynew_pred = model.predict(Xnew)
Ynew_pred[:10 ]
/Users/fernando/miniconda3/envs/MLMIC25/lib/python3.10/site-packages/sklearn/base.py:493: UserWarning: X does not have valid feature names, but StandardScaler was fitted with feature names
warnings.warn(
array([ 0.14799548, -0.2376762 , 0.05769325, 0.31106307, 0.2932709 ,
0.35958583, 0.27633267, 0.38818285, 0.04876662, 0.36026116])
Then we can get a new value of the loss:
Hide the code
loss_new = (1 / 2 ) * ((Ynew_pred - Ynew)** 2 ).mean()
loss_new
The neural network does not care whether these values belong to the training set, or even how many there are. But we can take a step further to better understand training. Below we see the 20 weights of the output layer (as fitted to the training data):
Hide the code
outer_weights = model["mlp"].coefs_[2]
outer_weights
array([[-5.25548563e-01],
[-1.82610993e-01],
[-2.07258451e-02],
[ 5.01400632e-01],
[-2.10009181e-01],
[ 1.33174938e-02],
[-4.89055375e-01],
[ 2.01155719e-02],
[ 2.95700964e-01],
[-1.40096016e-01],
[-2.77540400e-01],
[ 1.99960980e-09],
[-3.39275348e-07],
[-3.11143961e-01],
[ 4.89647944e-01],
[ 5.74968143e-01],
[-4.48779105e-01],
[ 4.07173610e-01],
[ 1.26540951e-06],
[-4.77446082e-01]])
Let us modify them, replacing them with arbitrary uniform values:
Hide the code
model["mlp" ].coefs_[2 ] = np.random.default_rng(2024 ).uniform(low = 0 , high = 0.1 , size = 20 ).reshape(- 1 , 1 )
model["mlp" ].coefs_[2 ]
array([[0.06758313],
[0.02143232],
[0.0309452 ],
[0.07994661],
[0.09958021],
[0.01422318],
[0.00787255],
[0.01808238],
[0.03596469],
[0.01696192],
[0.05887593],
[0.06168075],
[0.01053857],
[0.05657311],
[0.00046296],
[0.04651192],
[0.09756222],
[0.07994284],
[0.05968224],
[0.03253497]])
Now we have modified the neural network itself. In fact, in this case we have untrained it (effectively ruining it). To see this, let us run the training set through this altered neural network again to get the predictions. You can compare them with the predictions of the originally fitted, as yet untampered, network.
Hide the code
YTR_pred = model.predict(XTR)
YTR_pred[:10 ]
array([0.39286751, 0.56660871, 0.55717158, 0.47475645, 0.3529616 ,
0.34419725, 0.48586723, 0.39680396, 0.4032007 , 0.53357875])
Accordingly, the loss function value is different and much worse!
Hide the code
loss = (1 / 2 ) * ((YTR_pred - YTR)** 2 ).mean()
loss
Before we continue let us restore the model to its pristine fitted state:
Hide the code
model["mlp" ].coefs_[2 ] = outer_weights
It is critically important to understand these examples: the loss function depends on the weights and bias terms. And we can try to train the network by modifying those weights and biases to get lower values of the loss for the training data.
Put differently: when you look at the previous expressions for the loss you see \(y\) and \(\hat y\) . The \(y\) values belong to the dataset and there is nothing trainable about them. But the \(\hat y\) are the result of the forward pass of the training data through the neural network. And that means that they are computed using all the weights and bias terms (parameters). Weights and biases are the trainable components of the network.
Let us use the models that we have fitted to illustrate the role played by the loss function. For example, these are the graphs of the values of the loss functions for those models as the training of the neural network proceeds:
Hide the code
with warnings.catch_warnings():
warnings.simplefilter("ignore" )
fig, axs = plt.subplots(1 , 3 , figsize= (16 , 3 ))
axs[0 ].set_title("Binary Classification (make_circles) Loss curve" )
sns.lineplot(make_circles_model['mlp_binary' ].loss_curve_, ax= axs[0 ])
axs[1 ].set_title("Multilevel Classification (iris) Loss curve" )
sns.lineplot(iris_model['mlp_multi' ].loss_curve_, ax= axs[1 ])
axs[2 ].set_title("Regression Loss curve" )
sns.lineplot(mlp_reg_model['mlp_reg' ]['mlp' ].loss_curve_, ax= axs[2 ])
The three pictures illustrate that the fitting process proceeds by searching for smaller values of the loss.
Keep in mind that our goal when working with loss functions is to get the minimum value, as they are error functions. However, we need to fight overfitting while doing this. A large neural network (in terms of trainable parameters) is a very flexible model. That is why these loss functions are often combined with regularization techniques similar to the Ridge regression and Lasso that we have seen for linear models. For example, the MLPClassifier and MLPRegressor of sklearn both use an argument alpha that controls the regularization using a Ridge-like term added to the loss function.
Hide the code
alphas = [10 ** k for k in range (- 5 , 5 , 2 )]
loss_curves = []
test_scores = []
for i in range (len (alphas)):
mlp_reg_batch = MLPRegressor(hidden_layer_sizes= (80 , 20 ), alpha= alphas[i],
activation= "relu" , random_state= 2024 , max_iter= 2000 )
reg_scaler_batch = StandardScaler()
reg_scaler_batch.set_output(transform= "pandas" )
mlp_reg_batch_pipe = Pipeline(steps= [('scaler' , reg_scaler_batch),
('mlp' , mlp_reg_batch)])
mlp_reg_batch_pipe.fit(XTR, YTR)
test_scores.append(mlp_reg_batch_pipe.score(XTS, YTS))
model_batch = mlp_reg_batch_pipe
loss_curves.append(model_batch['mlp' ].loss_curve_)
Hide the code
with warnings.catch_warnings():
warnings.simplefilter("ignore" )
fig, axs = plt.subplots(len (alphas), 1 , figsize= (6 , 12 ))
for i in range (len (alphas)):
sns.lineplot(loss_curves[i], ax= axs[i])
axs[i].set_title(f"""Loss curve for regularization alpha = { alphas[i]} \n
Final loss { np. round (loss_curves[i][- 1 ], 3 )} ,
Test Score { np. round (test_scores[i], 3 )} """ )
fig.tight_layout()
Dropout is another form of regularization in which we randomly disconnect some neurons in a layer when computing the activations. This is similar in spirit to what we did in random forests: restricting the information available to the model helps prevent overfitting and forces the model to focus on the overall signal in the data instead of getting lost in details.
This dropout method is only used during training. When making predictions, all neurons in the network and their associated weights are used. Dropout is often implemented as an extra layer placed in between ordinary layers, which takes care of this switching off of some neurons.
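scikit-learn’s MLPs do not implement dropout, but the mechanism itself is easy to sketch in NumPy. The snippet below applies the common “inverted dropout” variant to a batch of hypothetical activations (all names are made up for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.normal(size=(4, 8))  # a batch of 4 samples, 8 neurons

p_drop = 0.5
mask = rng.random(activations.shape) >= p_drop  # True = neuron kept
# rescaling by 1/(1 - p_drop) keeps the expected activation unchanged,
# so no adjustment is needed at prediction time
dropped = activations * mask / (1 - p_drop)

# at prediction time no mask is applied: all neurons participate
```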
In a typical first-year Calculus course students learn that if a function \(f\) is differentiable, then the graph of the function is steepest in the direction of the gradient. Therefore, if you are aiming for smaller values of \(f\) , eventually reaching a minimum, then a seemingly sensible idea is to move in the direction opposite to the gradient.
This can be made into an iterative process in discrete steps as follows:

1. If you are located at point \(\bar p\) , find the gradient at that point, \(\nabla f(\bar p)\) . We will explain below how to do this using backpropagation.
2. Take a step of a certain length \(\gamma\) in the direction opposite to the gradient. That is, your new position is \[\bar q = \bar p - \gamma\nabla f(\bar p)\]
3. Repeat the operation at \(\bar q\) : find \(\nabla f(\bar q)\) , use it to take a \(\gamma\) -sized step, and so on.
The \(\gamma\) parameter is called the learning rate .
This iterative process is the basic idea of Gradient descent numerical optimization algorithms. You can visualize and experiment with the idea using this construction by Ben Frederiksson , which also contains visualizations for some other optimization methods. Or you can use our own GeoGebra visualization .
By playing with these visualizations you will discover that the choice of the learning rate is critical: too small a value and the method may take far too long to converge; too large a value and it may overshoot the minimum, possibly not converging at all. There are further difficulties: the initial point of the iterations plays an important part in the success or failure of the method. Besides, if the function has several local minima the method may converge to one such local minimum and fail to find the global minimum.
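The iterative scheme takes only a few lines of NumPy. The sketch below minimizes a simple two-dimensional quadratic (an illustration, not one of the loss functions from the text):

```python
import numpy as np

def grad_f(p):
    # gradient of f(p) = (p[0] - 1)**2 + (p[1] + 2)**2, minimum at (1, -2)
    return np.array([2 * (p[0] - 1), 2 * (p[1] + 2)])

p = np.array([5.0, 5.0])  # initial point
gamma = 0.1               # learning rate
for _ in range(100):
    p = p - gamma * grad_f(p)  # step against the gradient

print(np.round(p, 4))  # converges to the minimum at (1, -2)
```

Try gamma = 1.1 here to watch the divergent bouncing described above, or gamma = 0.001 to see how slowly the iterates crawl toward the minimum.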
The appeal of gradient descent is obvious, however, and it is based on a very simple yet powerful idea: the only critical skill it requires is the ability to compute the gradient of the function wherever you are. And by “wherever you are” we mean a point in the space of parameters (weights and biases) of the network.
We have seen before how running the training set through the forward pass of the network will give us the value of the loss function. That loss value depends, as we keep repeating, on the current selection of weights and biases for the network. The entire selection of weights and biases is the \(\bar p\) in the description of gradient descent. Assume for a moment that we already know how to compute the gradient \(\nabla\cal L(\bar p)\) . Then we apply the equation in the second step of gradient descent to change each weight and bias in the network. Then we run the training set through the network again, get a new gradient and so on. That is the essence of how we train the network.
The above discussion assumes that we use the whole training set in each iteration of gradient descent. But we have also seen that we can use any subset of the input data to get an estimate of the value of the loss function. And we will see below that we can also use that smaller part of the data to get an estimate of the gradient \(\nabla\cal L(\bar p)\) . Of course, an estimate based on a smaller data sample is expected to be a lower quality estimate. So, why would we do that?
First, because selecting a smaller subset speeds up computation. Second, because using a theoretically worse version of the gradient has empirically proved to help the training process. Possibly by escaping local minima that could act like a trap for the gradient descent iterations.
This method of using smaller subsets of the training data is called stochastic gradient descent (because each gradient we compute will be a random or stochastic estimate of the gradient for the whole training set). And it is organized as follows:

1. We shuffle (randomly permute) the training data.
2. We divide the shuffled data into batches (or minibatches , depending on the authors). They are usually sized as a power of 2 to benefit from the computing architecture of our computers. This implies that there may be a final batch whose size is smaller. For example, with 100 training data points and 16-sized minibatches you will end up with 6 full-sized minibatches and a last one with only 4 elements, since \(6\cdot 16 + 4 = 100\) .
3. Each minibatch is used in turn to make a forward pass, compute the loss gradient with backpropagation and update the network parameters. The result of this set of operations for all minibatches is called an epoch of training.
4. Then we start a new epoch and repeat the above three steps. Training proceeds this way until some stopping criterion is met, concerning the changes in the loss function or a predefined maximal number of epochs.
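The shuffling and batching steps can be sketched as follows (a minimal illustration of the 100-point, 16-sized minibatch example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch_size = 100, 16

idx = rng.permutation(n)  # shuffle the training data indices
# split into minibatches; the last one may be smaller
batches = [idx[i:i + batch_size] for i in range(0, n, batch_size)]

print([len(b) for b in batches])  # [16, 16, 16, 16, 16, 16, 4]
```

An epoch would then loop over batches, performing a forward pass and a parameter update for each minibatch.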
In scikit-learn we can control this with the batch_size argument of both MLPClassifier and MLPRegressor.
Hide the code
batch_sizes = [2, 64, 256, XTR.shape[0]]
loss_curves = []
for i in range(len(batch_sizes)):
    mlp_reg_batch = MLPRegressor(hidden_layer_sizes=(80, 20), batch_size=batch_sizes[i],
                                 activation="relu", random_state=2024, max_iter=2000)
    reg_scaler_batch = StandardScaler()
    reg_scaler_batch.set_output(transform="pandas")
    mlp_reg_batch_pipe = Pipeline(steps=[('scaler', reg_scaler_batch),
                                         ('mlp', mlp_reg_batch)])
    mlp_reg_batch_pipe.fit(XTR, YTR)
    model_batch = mlp_reg_batch_pipe
    loss_curves.append(model_batch['mlp'].loss_curve_)
Hide the code
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fig, axs = plt.subplots(1, len(batch_sizes), figsize=(16, 3))
    for i in range(len(batch_sizes)):
        sns.lineplot(loss_curves[i], ax=axs[i])
        axs[i].set_title(f"Loss curve for batch size {batch_sizes[i]}\n"
                         f"Final loss {np.round(loss_curves[i][-1], 3)}")
The results show that, all other things equal, using the whole training set (last loss curve) is not a good idea: apparently it fails to reach the minimum. Note also that the first curve is much noisier, because gradient estimates computed from very small batches are much more unstable.
Choosing the learning rate is often a critical choice when training a neural network. Recall that the learning rate controls the size of the step we take when performing gradient descent. On the one hand, if we take too large a step we risk jumping over the minima or bouncing endlessly without finding a good fit of the model. If, on the other hand, we choose a step size that is too small, we risk getting stuck in a plateau of the loss function and never reaching a minimum.
A related idea is that of momentum . This physically inspired method adds a gradient memory to the gradient descent strategy. That means that, to take the next step, instead of using only the latest value of the gradient we add a fraction of the previous update. This has been shown to improve the convergence of the method to better (smaller) loss values.
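As a toy illustration, here is a minimal sketch (our own construction, not scikit-learn code) of gradient descent with a classical momentum update, minimizing a one-dimensional quadratic:

```python
def gd(grad, p0, lr=0.1, momentum=0.0, steps=50):
    """Gradient descent where each step keeps a fraction (momentum) of the previous update."""
    p, velocity = float(p0), 0.0
    for _ in range(steps):
        velocity = momentum * velocity - lr * grad(p)  # gradient memory: fraction of the last update
        p = p + velocity
    return p

# Minimize f(p) = (p - 3)^2, whose gradient is 2 (p - 3); the minimum is at p = 3.
grad_f = lambda p: 2 * (p - 3)
print(gd(grad_f, p0=0.0, steps=200))                # converges close to 3
print(gd(grad_f, p0=0.0, momentum=0.9, steps=200))  # also close to 3
```

On this smooth toy objective both versions converge; the benefit of momentum shows up mainly on noisier, higher-dimensional loss surfaces.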
Let us use the example of a neural network for regression that we have trained and see the impact of different learning rates. First we turn off momentum entirely.
Technical note: we have switched to solver = 'sgd' so that we can keep a constant value of the learning rate during the whole training process. We will return to this below.
Hide the code
learning_rates = [0.00001, 0.005, 0.01, 0.05, 0.1, 0.5]
loss_curves = []
train_scores = []
for i in range(len(learning_rates)):
    mlp_reg_batch = MLPRegressor(hidden_layer_sizes=(80, 20), learning_rate='constant', solver='sgd',
                                 learning_rate_init=learning_rates[i], momentum=0,
                                 activation="relu", random_state=2024, max_iter=2000)
    reg_scaler_batch = StandardScaler()
    reg_scaler_batch.set_output(transform="pandas")
    mlp_reg_batch_pipe = Pipeline(steps=[('scaler', reg_scaler_batch),
                                         ('mlp', mlp_reg_batch)])
    mlp_reg_batch_pipe.fit(XTR, YTR)
    train_scores.append(cross_val_score(mlp_reg_batch_pipe, XTR, YTR, scoring='neg_mean_squared_error').mean())
    model_batch = mlp_reg_batch_pipe
    loss_curves.append(model_batch['mlp'].loss_curve_)
/Users/fernando/miniconda3/envs/MLMIC25/lib/python3.10/site-packages/sklearn/neural_network/_multilayer_perceptron.py:690: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (2000) reached and the optimization hasn't converged yet.
warnings.warn(
Now let us see the loss curves and validation scores of the different learning rates.
Hide the code
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fig, axs = plt.subplots(len(learning_rates), 1, figsize=(6, 12))
    for i in range(len(learning_rates)):
        sns.lineplot(loss_curves[i], ax=axs[i])
        axs[i].set_title(f"Loss curve for learning rate = {learning_rates[i]}\n"
                         f"Final loss {np.round(loss_curves[i][-1], 3)}, "
                         f"Training Score (negMSE) {np.round(train_scores[i], 3)}")
    fig.tight_layout()
The result of the experiment confirms what we said before. The smallest learning rate gives a very bad result, failing to reach any interesting loss value after 2000 iterations: we are simply moving too slowly. At the other end of the experiment, the largest learning rate results in a bouncing loss that does not provide a good fit. Meanwhile, some intermediate values of the learning rate seem to be doing much better.
Let us now bring back momentum; it is on by default, so we only need to remove the momentum=0 option and rerun the code.
Hide the code
learning_rates = [0.00001, 0.005, 0.01, 0.05, 0.1, 0.5]
loss_curves = []
train_scores = []
for i in range(len(learning_rates)):
    mlp_reg_batch = MLPRegressor(hidden_layer_sizes=(80, 20), learning_rate='constant', solver='sgd',
                                 learning_rate_init=learning_rates[i],
                                 activation="relu", random_state=2024, max_iter=2000)
    reg_scaler_batch = StandardScaler()
    reg_scaler_batch.set_output(transform="pandas")
    mlp_reg_batch_pipe = Pipeline(steps=[('scaler', reg_scaler_batch),
                                         ('mlp', mlp_reg_batch)])
    mlp_reg_batch_pipe.fit(XTR, YTR)
    train_scores.append(cross_val_score(mlp_reg_batch_pipe, XTR, YTR, scoring='neg_mean_squared_error').mean())
    model_batch = mlp_reg_batch_pipe
    loss_curves.append(model_batch['mlp'].loss_curve_)
Hide the code
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fig, axs = plt.subplots(len(learning_rates), 1, figsize=(6, 12))
    for i in range(len(learning_rates)):
        sns.lineplot(loss_curves[i], ax=axs[i])
        axs[i].set_title(f"Loss curve for learning rate = {learning_rates[i]}\n"
                         f"Final loss {np.round(loss_curves[i][-1], 3)}, "
                         f"Training Score (negMSE) {np.round(train_scores[i], 3)}")
    fig.tight_layout()
The impact of momentum is a global improvement of the behavior across all learning rates . The ordering is preserved in this case: the best learning rates are still similar, but the convergence of the method improves and we get better final values of the loss.
The term optimizers describes a set of tools designed to speed up learning by improving the efficiency of gradient descent (see Chapter 15 of (Glassner 2021 ) ). For example, instead of keeping a constant learning rate, we can use a learning rate schedule method . A natural idea is to take longer steps at the beginning, trying to move swiftly across the possible plateaus in the loss function, and then slowing down as we may come closer to a minimum.
This strategy is easily implemented by selecting a decay rate \(s\) smaller than 1 and multiplying the learning rate \(\lambda\) by \(s\) after each step. That means that instead of a constant learning rate we are now using an exponentially decaying sequence of learning rates, decreasing to zero: \[
\lambda, s\,\lambda, s^2\,\lambda,\ldots, s^k\,\lambda,\ldots \rightarrow 0
\]
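A minimal sketch of such a schedule, assuming an initial rate \(\lambda = 0.1\) and decay rate \(s = 0.9\) (values chosen just for illustration):

```python
# Exponentially decaying schedule: the k-th learning rate is s**k * lambda_0.
lam0, s = 0.1, 0.9   # initial learning rate and decay rate (illustrative values)
schedule = [lam0 * s ** k for k in range(5)]
print(schedule)  # each rate is 90% of the previous one, decreasing toward 0
```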
The idea behind this is illustrated by this Figure 15-16 from (Glassner 2021 ) .
There are many possible variants based on this basic idea. We can apply the learning rate update after each epoch, instead of after every gradient descent step. Or we can monitor the loss and change the learning rate using that information (e.g. slowing down if we detect bouncing ). The idea of momentum can be considered an optimizer as well. There are indeed improved versions of momentum, notably Nesterov’s momentum : it combines the momentum from past gradients with an anticipated gradient value from the near future to obtain a corrected update, which has been shown to usually perform better than simple momentum. In scikit-learn you can use it when you (manually) select stochastic gradient descent as the solver.
Another family of optimizer techniques adapts the learning rate to each individual parameter (weights and biases) of the network. The most popular of these adaptive methods is called Adam (from Adaptive Moment Estimation). You may think of Adam and related methods as finding a decay rate tailored to each individual parameter, taking into account the history of changes that the parameter has undergone, weighted by how recent they are. The MLP functions of scikit-learn use Adam by default to implement gradient descent.
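The following is a simplified sketch of the Adam update rule (our own toy version, not the actual scikit-learn or Keras implementation), applied to the same kind of one-dimensional quadratic objective used before:

```python
import numpy as np

def adam_step(p, g, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: the step for each parameter is scaled by running moments of its gradient."""
    m = beta1 * m + (1 - beta1) * g        # exponentially weighted mean of recent gradients
    v = beta2 * v + (1 - beta2) * g ** 2   # exponentially weighted mean of recent squared gradients
    m_hat = m / (1 - beta1 ** t)           # bias corrections, important in the first steps
    v_hat = v / (1 - beta2 ** t)
    return p - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(p) = (p - 3)^2; the gradient is 2 (p - 3).
p, m, v = 0.0, 0.0, 0.0
for t in range(1, 3001):
    p, m, v = adam_step(p, 2 * (p - 3), m, v, t)
print(p)  # approaches 3
```

Note how the effective step size for the parameter depends on the recent history of its own gradients, not on a single global learning rate.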
If gradient descent is the engine of neural networks, backpropagation (backprop for short) provides the fuel: the gradient. It is essentially a clever application of the chain rule from Calculus, combined with some dynamic programming ideas to organize the computation and avoid repeatedly computing the same quantity.
Here we will focus on the chain rule part of backprop, which is conceptually the cornerstone of the algorithm. To illustrate the idea we consider the neural network in our regression example, pictured below. Recall that the loss function \(\cal L\) is a function of the weights and bias terms of the network. And let us suppose that we want to compute the following component of the gradient: \[\dfrac{\partial \cal L}{\partial w^{(1)}_{04}}\] where \(w^{(1)}_{04}\) is the weight for the green colored edge of the graph.
To understand the notation consider the picture below. We are closely following here the notation in Chapter 10 of (James et al. 2023 ) :
The weights in the \(r\)-th hidden layer are called \(w^{(r)}_{ij}\) . The subscripts \(i\) and \(j\) indicate the neurons they connect.
For the special case of the input layer, \(i\) indicates the input number. Thus, the weight \(w^{(1)}_{04}\) we are considering connects the first input to the fourth neuron in the first hidden layer.
The bias term of the \(j\)-th neuron in the \(r\)-th hidden layer is indicated by \(w^{(r)}_{Bj}\) .
The \(j\)-th neuron in the \(r\)-th layer computes the activation \[A^{(r)}_{j} = h^{(r)}(z^{(r)}_{j}) = h^{(r)}\left(w^{(r)}_{Bj}
+\underbrace{w^{(r)}_{0j}A^{(r-1)}_{0} + w^{(r)}_{1j}A^{(r-1)}_{1} + \cdots}_{\text{ sum over all neurons of the preceding layer}}
\right)\] where \(h^{(r)}\) is the common activation function for the \(r\)-th hidden layer (e.g. relu).
For the special case of the first hidden layer, the activations \(A^{(r-1)}_{k}\) must be replaced with the \(k\) -th input value.
For the output layer we use the superscript \((out)\) . Thus \(h^{(out)}\) is the output layer activation function (usually the identity in regression), and \(w^{(out)}_{rs}\) indicates a weight in the output layer. In regression there is only one neuron in the output layer, so in principle the second subscript could be omitted, but we keep it for greater generality.
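To make the notation concrete, here is a minimal NumPy computation of the activations of one hidden layer (the weights, biases and input activations are made-up numbers):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# One hidden layer in the notation above: z_j = w_Bj + sum_k w_kj A_k, then A_j = h(z_j).
A_prev = np.array([0.5, -1.0, 2.0])       # activations A_k of the preceding layer
W = np.array([[0.1, -0.2],                # W[k, j] plays the role of w_kj
              [0.4,  0.3],
              [-0.5, 0.8]])
b = np.array([0.05, -0.1])                # bias terms w_Bj

z = b + A_prev @ W                        # the affine combinations z_j
A = relu(z)                               # the activations of this layer
print(z, A)  # z = [-1.3, 1.1], so relu gives A = [0.0, 1.1]
```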
What do the red and blue edges represent? In this regression problem, where the loss is \[{\cal L}_{MSE} = \dfrac{1}{2n}\sum_{i = 1}^n (\hat y_i - y_i)^2\] the predicted value \(\hat y_i\) is computed using the activations in the second hidden layer and the weights \(w^{(out)}_{rs}\) of the output layer. These weights are the rightmost group of colored edges.
Each neuron of the second hidden layer is in turn computed from activations coming from the first hidden layer. The second group of red edges identifies how the activation \(A^{(1)}_{4}\) (first hidden layer, neuron number 4) is used as input in each of the neurons of the second hidden layer. And we care about neuron number 4 because the (green edge) weight \(w^{(1)}_{04}\) is one of the inputs of that neuron (and only of that neuron).
To apply the chain rule we need to think of all the composition paths that connect the loss function value to \(w^{(1)}_{04}\) . Now you can see that this is what the red edges represent. We must compute a product of derivatives along each one of those paths and then add up the results. The blue colored path is a particular example of one such composition path. Let us follow the chain of derivatives of this blue path in the backward direction of the algorithm. We begin by computing \[
\dfrac{\partial \cal L}{\partial \hat y_i} = \dfrac{1}{n} (\hat y_i - y_i)
\] Recall that \(\hat y_i\) is the prediction that the network outputs for the input values \((X_{i0}, X_{i1})\) .
Now \[\hat y_i = h^{out}(z_{out}) = h^{out}\left(
w^{out}_{B} + w^{out}_{0}A^{(2)}_0 + w^{out}_{1}A^{(2)}_1 +\cdots +
\color{blue}w^{out}_{3}A^{(2)}_3\color{black} + \cdots + w^{out}_{5}A^{(2)}_5
\right)\] and so the next derivative in the blue chain is: \[\dfrac{\partial \hat y_i}{\partial A^{(2)}_3} = (h^{out})'(z_{out})\cdot w^{out}_{3}\] Now we have reached \(A^{(2)}_3\) with \[
A^{(2)}_3 = h^{2}(z^{(2)}_3) = h^{(2)}\left(
w^{(2)}_{B3} + w^{(2)}_{03}A^{(1)}_{0} + w^{(2)}_{13}A^{(1)}_{1} +\cdots +
\color{blue}w^{(2)}_{43}A^{(1)}_4\color{black} + \cdots + w^{(2)}_{73}A^{(1)}_7
\right)
\] The next derivative in our path is therefore: \[\dfrac{\partial A^{(2)}_3}{\partial A^{(1)}_4} = (h^{2})'(z^{(2)}_3)\cdot w^{(2)}_{43}\] Finally (in this really simple neural network) we reach \(A^{(1)}_4\) with \[
A^{(1)}_4 = h^{1}(z^{(1)}_4) = h^{(1)}\left(
w^{(1)}_{B4} + \color{green}w^{(1)}_{04}X_{i0}\color{black} + w^{(1)}_{14}X_{i1}
\right)
\] The green color highlights that we have reached the desired weight \(w^{(1)}_{04}\) and so the final derivative in the chain is: \[\dfrac{\partial A^{(1)}_4}{\partial w^{(1)}_{04}} = (h^{1})'(z^{(1)}_4)\cdot X_{i0}\] Putting all the pieces together, the product of derivatives along the blue path is: \[\dfrac{\partial \cal L}{\partial \hat y_i}\cdot(h^{out})'(z_{out})\cdot w^{out}_{3}\cdot
(h^{2})'(z^{(2)}_3)\cdot w^{(2)}_{43}\cdot (h^{1})'(z^{(1)}_4)\cdot X_{i0}\] The derivative \[\dfrac{\partial \cal L}{\partial w^{(1)}_{04}}\] is a sum of six products like this, one for each red or blue path in the picture. Remember that the forward pass has provided us with all the weights and all the affine combinations \(z^{(i)}_j\) . In particular, we know all the values that appear in the above product, and so we can compute the gradient.
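To convince yourself that the chain-rule bookkeeping is right, here is a minimal check on a toy network with a single neuron per layer (our own construction, not the network in the figure), where the gradient is a single such product of derivatives; it agrees with a centered finite difference:

```python
import numpy as np

# Tiny 1-1-1-1 network: tanh hidden activations, identity output, biases omitted,
# so the loss-to-weight derivative is one chain-rule product (there is only one path).
x, y = 0.7, 1.0
w1, w2, w3 = 0.5, -0.3, 0.8          # one weight per layer

def forward(w1):
    a1 = np.tanh(w1 * x)             # first hidden activation
    a2 = np.tanh(w2 * a1)            # second hidden activation
    y_hat = w3 * a2                  # identity output activation
    return a1, a2, y_hat

a1, a2, y_hat = forward(w1)

# Backward pass: product of the derivatives along the (only) path.
dL_dyhat = y_hat - y                 # derivative of 0.5 * (y_hat - y)^2
dyhat_da2 = w3
da2_da1 = (1 - a2 ** 2) * w2         # tanh'(z) = 1 - tanh(z)^2
da1_dw1 = (1 - a1 ** 2) * x
grad_backprop = dL_dyhat * dyhat_da2 * da2_da1 * da1_dw1

# Check against a centered finite difference of the loss.
h = 1e-6
lp = 0.5 * (forward(w1 + h)[2] - y) ** 2
lm = 0.5 * (forward(w1 - h)[2] - y) ** 2
grad_numeric = (lp - lm) / (2 * h)
print(grad_backprop, grad_numeric)   # the two values agree
```

In the real network the same products are computed for every path and summed, which is exactly what the matrix form of backprop organizes efficiently.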
The above construction of the gradient may seem intimidating at first, but it should be clear that we can use matrix computations to organize the chain rule and parallelize it during computation. In many cases it is convenient to think in terms of higher dimensional stacks of arrays , so-called tensors . That is the origin of the name of one of the first and most popular deep learning libraries, TensorFlow. The most widespread libraries are TensorFlow and PyTorch .
The graph below, from a video at 3Blue1Brown , conveys the idea of the backpropagation algorithm as a backward flow of tensor information used to update the gradient.
Scikeras
In this final section we will introduce the scikeras library. This library is a bridge between the scikit-learn and keras libraries. It allows us to define and train neural networks with Keras, using the familiar scikit-learn interface. You should be aware that Tensorflow/Keras (and similarly PyTorch) are very powerful libraries, independent from scikit-learn. In particular they provide their own preprocessing tools, along with optimizers, loss functions and many classes that help define complex neural network architectures. So eventually you will probably switch to those native deep learning tools and libraries for this kind of model. Our intention here is just to take a first step in that direction.
Again, the code below will not work unless both tensorflow and scikeras are installed. If you have already installed Tensorflow, we recommend pinning scikeras to this version:
pip install scikeras==0.11.0
Backup your environment first, we mean it!
Hide the code
import tensorflow as tf
print("Tensorflow version: ", tf.__version__)
print("GPUs Available: ", tf.config.list_physical_devices('GPU'))
import scikeras
print("Scikeras version: ", scikeras.__version__)
Tensorflow version: 2.12.0
GPUs Available: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
We make sure that we are starting a fresh work session with Keras.
Hide the code
tf.keras.backend.clear_session()
We will be using as example the same diamond dataset that we used in the previous session, and we will use the same preprocessing steps. This preliminary part of the work is performed by the code in an external script that we run here.
Hide the code
%run -i "3_3_preprocessing_pipeline.py"
Let us recall the structure of the dataset after preprocessing:
Hide the code
preproc_pipeline.fit_transform(XTR)
(Output: the transformed training set, a pandas DataFrame combining the standardized numeric features with the one-hot encoded categorical columns.)
43152 rows × 22 columns
Now we load the scikeras wrapper and the Keras modules we will need, and reset the Keras session.
Hide the code
from scikeras.wrappers import KerasRegressor
from tensorflow import keras
from tensorflow.keras import layers
tf.keras.backend.clear_session()
The next code cell is where we really cross the bridge from scikit-learn to Keras. We define a Keras model using the Sequential class. This class allows us to define a neural network model as a sequence of layers. This is certainly not the only way to define a neural network in Keras, but it is the most basic one and the first you should master. We encourage you to follow the reference (Géron 2022 ) as it offers a similar path to the one we have been following in this course: it starts with scikit-learn models and then leads you into Keras and Tensorflow. The book by (Glassner 2021 ) and the one by (Chollet 2024 ) are also excellent resources (Chollet is the creator of Keras).
In our session we will go over this code and explain the architecture of the network, although thanks to Keras it is almost self-explanatory.
Hide the code
def build_keras_model(meta, hidden_units=64, activation='relu', optimizer='adam', learning_rate=1e-3):
    # n_features_in_ = meta["n_features_in_"]
    X_shape_ = meta["X_shape_"]
    # y_shape_ = meta["y_shape_"]
    model = keras.Sequential([
        layers.Dense(hidden_units, activation=activation, input_shape=(X_shape_[1],)),
        layers.Dense(hidden_units // 2, activation=activation),
        layers.Dense(1)  # Output layer for regression
    ])
    # This block of code is not really essential, it is just for compatibility
    from packaging.version import parse
    if parse(tf.keras.__version__) >= parse("3.8.0"):
        optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    else:
        optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate)
    model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])
    return model
Hide the code
keras_model = KerasRegressor(model=build_keras_model, epochs=30, batch_size=32, verbose=1,
                             optimizer='adam', learning_rate=1e-3)
We now connect the preprocessing pipeline with the (sci)Keras model.
Hide the code
keras_pipeline = Pipeline([
    ('preproc', preproc_pipeline),  # You can add more preprocessing steps here
    ('keras_model', keras_model)
])
And the rest is ground we have already covered. We leave it as an exercise to you to explore the performance of the model, and compare it with the models in the previous session.
Hide the code
keras_pipeline.fit(XTR, YTR);
Epoch 1/30
1349/1349 [==============================] - 5s 4ms/step - loss: 1.9325 - mae: 0.6031
Epoch 2/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0457 - mae: 0.1408
Epoch 3/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0254 - mae: 0.1137
Epoch 4/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0220 - mae: 0.1072
Epoch 5/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0208 - mae: 0.1042
Epoch 6/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0200 - mae: 0.1024
Epoch 7/30
1349/1349 [==============================] - 6s 4ms/step - loss: 0.0195 - mae: 0.1009
Epoch 8/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0188 - mae: 0.0993
Epoch 9/30
1349/1349 [==============================] - 6s 4ms/step - loss: 0.0183 - mae: 0.0976
Epoch 10/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0181 - mae: 0.0972
Epoch 11/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0176 - mae: 0.0958
Epoch 12/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0170 - mae: 0.0940
Epoch 13/30
1349/1349 [==============================] - 4s 3ms/step - loss: 0.0171 - mae: 0.0946
Epoch 14/30
1349/1349 [==============================] - 4s 3ms/step - loss: 0.0166 - mae: 0.0929
Epoch 15/30
1349/1349 [==============================] - 4s 3ms/step - loss: 0.0168 - mae: 0.0938
Epoch 16/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0162 - mae: 0.0913
Epoch 17/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0164 - mae: 0.0928
Epoch 18/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0161 - mae: 0.0911
Epoch 19/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0160 - mae: 0.0911
Epoch 20/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0160 - mae: 0.0911
Epoch 21/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0160 - mae: 0.0912
Epoch 22/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0157 - mae: 0.0905
Epoch 23/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0155 - mae: 0.0895
Epoch 24/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0155 - mae: 0.0893
Epoch 25/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0157 - mae: 0.0901
Epoch 26/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0152 - mae: 0.0887
Epoch 27/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0155 - mae: 0.0900
Epoch 28/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0151 - mae: 0.0880
Epoch 29/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0151 - mae: 0.0884
Epoch 30/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0150 - mae: 0.0881
The score of the model in the test set gives you a hint of the performance of the model.
Hide the code
from sklearn.metrics import root_mean_squared_error
root_mean_squared_error(YTS, keras_pipeline.predict(XTS))
338/338 [==============================] - 0s 1ms/step
And the following plot assesses the agreement between the predicted and observed values of the target variable.
Hide the code
sns.scatterplot(x= YTS, y= keras_pipeline.predict(XTS))
338/338 [==============================] - 0s 1ms/step
However, to be fair with this model we should think about the hyperparameters that define it and how to find a good combination. That is such a broad topic that we can hardly scratch the surface here. But we can at least show you how to use the GridSearchCV tool from scikit-learn to search for a good value of one of the defining hyperparameters of this model: the learning rate. Keep in mind that even using a GPU this grid search took us over 35 minutes.
Hide the code
from sklearn.model_selection import GridSearchCV, KFold

cv = KFold(n_splits=10, shuffle=True, random_state=2025)
hyp_grid = {
    # 'keras_model__model__hidden_units': [64, 128],        # Number of neurons in the first layer
    # 'keras_model__model__activation': ['relu', 'tanh'],   # Activation function
    'keras_model__model__learning_rate': [1e-4, 1e-3, 1e-2, 1e-1],  # Learning rate
    # 'keras_model__model__optimizer': ['adam', 'rmsprop'], # Optimizer
    # 'keras_model__epochs': [50, 100],                     # Number of epochs
    # 'keras_model__batch_size': [16, 32]                   # Batch size
}
keras_gridCV = GridSearchCV(keras_pipeline,
                            hyp_grid,
                            scoring='neg_mean_squared_error',
                            cv=cv,
                            verbose=2, n_jobs=-1)
keras_gridCV.fit(XTR, YTR)
Fitting 10 folds for each of 4 candidates, totalling 40 fits
Metal device set to: Apple M2
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
2025-03-11 11:30:48.466037: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
(The interleaved Keras training logs of the 40 parallel fits are omitted.)
71/1214 [>.............................] - ETA: 10s - loss: 0.1107 - mae: 0.2006Epoch 7/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0702 - mae: 0.1694
Epoch 8/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0864 - mae: 0.1832
Epoch 8/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0733 - mae: 0.1730
Epoch 8/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0885 - mae: 0.1835
Epoch 8/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0756 - mae: 0.1753
Epoch 8/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0842 - mae: 0.1798
Epoch 8/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0779 - mae: 0.1767
Epoch 8/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0779 - mae: 0.1744
Epoch 8/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0640 - mae: 0.1618
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0538 - mae: 0.1519
1/1214 [..............................] - ETA: 12s - loss: 0.1290 - mae: 0.2167Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0557 - mae: 0.1532
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0566 - mae: 0.1554
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0651 - mae: 0.1610
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0612 - mae: 0.1576
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0571 - mae: 0.1544
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0594 - mae: 0.1554
Epoch 9/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0493 - mae: 0.1462
1073/1214 [=========================>....] - ETA: 1s - loss: 0.0479 - mae: 0.1426Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0439 - mae: 0.1398
1018/1214 [========================>.....] - ETA: 1s - loss: 0.0482 - mae: 0.1429Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0449 - mae: 0.1415
Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0440 - mae: 0.1391
1213/1214 [============================>.] - ETA: 0s - loss: 0.0500 - mae: 0.1450Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0500 - mae: 0.1450
Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0469 - mae: 0.1421
Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0442 - mae: 0.1389
Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0470 - mae: 0.1416
Epoch 10/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0398 - mae: 0.1356
Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0372 - mae: 0.1307
1093/1214 [==========================>...] - ETA: 1s - loss: 0.0385 - mae: 0.1320Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0366 - mae: 0.1298
105/1214 [=>............................] - ETA: 10s - loss: 0.0424 - mae: 0.1297Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0371 - mae: 0.1313
Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0382 - mae: 0.1318
Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0403 - mae: 0.1344
1145/1214 [===========================>..] - ETA: 0s - loss: 0.0392 - mae: 0.1319Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0362 - mae: 0.1286
45/1214 [>.............................] - ETA: 11s - loss: 0.0511 - mae: 0.1357Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0387 - mae: 0.1315
Epoch 11/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0337 - mae: 0.1275
Epoch 12/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0324 - mae: 0.1240
Epoch 12/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0321 - mae: 0.1240
Epoch 12/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0316 - mae: 0.1225
1180/1214 [============================>.] - ETA: 0s - loss: 0.0310 - mae: 0.1216Epoch 12/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0326 - mae: 0.1246
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0340 - mae: 0.1265
Epoch 12/30
Epoch 12/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0310 - mae: 0.1214
Epoch 12/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0330 - mae: 0.1242
Epoch 12/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0296 - mae: 0.1213
Epoch 13/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0289 - mae: 0.1185
1082/1214 [=========================>....] - ETA: 1s - loss: 0.0303 - mae: 0.1210Epoch 13/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0288 - mae: 0.1188
Epoch 13/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0281 - mae: 0.1168
Epoch 13/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0298 - mae: 0.1206
Epoch 13/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0276 - mae: 0.1160
Epoch 13/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0287 - mae: 0.1185
Epoch 13/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0291 - mae: 0.1181
Epoch 13/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0268 - mae: 0.1164
Epoch 14/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0264 - mae: 0.1140
Epoch 14/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0264 - mae: 0.1141
Epoch 14/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0258 - mae: 0.1127
Epoch 14/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0253 - mae: 0.1119
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0262 - mae: 0.1143
139/1214 [==>...........................] - ETA: 10s - loss: 0.0282 - mae: 0.1151Epoch 14/30
Epoch 14/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0268 - mae: 0.1156
1/1214 [..............................] - ETA: 11s - loss: 0.0359 - mae: 0.1286Epoch 14/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0265 - mae: 0.1138
Epoch 14/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0245 - mae: 0.1107
Epoch 15/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0248 - mae: 0.1127
937/1214 [======================>.......] - ETA: 2s - loss: 0.0254 - mae: 0.1106Epoch 15/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0247 - mae: 0.1110
Epoch 15/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0242 - mae: 0.1097
Epoch 15/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0248 - mae: 0.1119
72/1214 [>.............................] - ETA: 11s - loss: 0.0256 - mae: 0.1118Epoch 15/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0244 - mae: 0.1108
52/1214 [>.............................] - ETA: 11s - loss: 0.0202 - mae: 0.1075Epoch 15/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0237 - mae: 0.1085
Epoch 15/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0245 - mae: 0.1100
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0233 - mae: 0.1094
1194/1214 [============================>.] - ETA: 0s - loss: 0.0231 - mae: 0.1076Epoch 16/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0230 - mae: 0.1075
Epoch 16/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0234 - mae: 0.1083
Epoch 16/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0230 - mae: 0.1070
1146/1214 [===========================>..] - ETA: 0s - loss: 0.0225 - mae: 0.1062Epoch 16/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0232 - mae: 0.1083
Epoch 16/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0233 - mae: 0.1090
56/1214 [>.............................] - ETA: 14s - loss: 0.0234 - mae: 0.1064Epoch 16/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0225 - mae: 0.1061
Epoch 16/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0232 - mae: 0.1074
Epoch 16/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0219 - mae: 0.1052
1028/1214 [========================>.....] - ETA: 1s - loss: 0.0219 - mae: 0.1044Epoch 17/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0222 - mae: 0.1069
Epoch 17/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0224 - mae: 0.1058
Epoch 17/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0221 - mae: 0.1050
Epoch 17/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0223 - mae: 0.1061
Epoch 17/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0223 - mae: 0.1066
Epoch 17/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0217 - mae: 0.1039
23/1214 [..............................] - ETA: 11s - loss: 0.0191 - mae: 0.1093Epoch 17/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0221 - mae: 0.1051
Epoch 17/30
1214/1214 [==============================] - 12s 9ms/step - loss: 0.0211 - mae: 0.1034
Epoch 18/30
1214/1214 [==============================] - 12s 9ms/step - loss: 0.0213 - mae: 0.1047
1074/1214 [=========================>....] - ETA: 1s - loss: 0.0216 - mae: 0.1045Epoch 18/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0215 - mae: 0.1040
Epoch 18/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0213 - mae: 0.1033
Epoch 18/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0215 - mae: 0.1044
Epoch 18/30
1214/1214 [==============================] - 12s 9ms/step - loss: 0.0213 - mae: 0.1045
Epoch 18/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0209 - mae: 0.1022
114/1214 [=>............................] - ETA: 10s - loss: 0.0221 - mae: 0.1014Epoch 18/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0213 - mae: 0.1032
171/1214 [===>..........................] - ETA: 9s - loss: 0.0204 - mae: 0.1039Epoch 18/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0206 - mae: 0.1028
Epoch 19/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0204 - mae: 0.1019
Epoch 19/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0209 - mae: 0.1023
Epoch 19/30
1214/1214 [==============================] - 11s 9ms/step - loss: 0.0206 - mae: 0.1017
46/1214 [>.............................] - ETA: 10s - loss: 0.0345 - mae: 0.1079Epoch 19/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0209 - mae: 0.1028
54/1214 [>.............................] - ETA: 12s - loss: 0.0285 - mae: 0.1039Epoch 19/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0207 - mae: 0.1030
153/1214 [==>...........................] - ETA: 10s - loss: 0.0197 - mae: 0.1010Epoch 19/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0203 - mae: 0.1010
Epoch 19/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0207 - mae: 0.1017
118/1214 [=>............................] - ETA: 10s - loss: 0.0173 - mae: 0.0962Epoch 19/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0203 - mae: 0.1010
Epoch 20/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0200 - mae: 0.1014
7/1214 [..............................] - ETA: 11s - loss: 0.0177 - mae: 0.1039Epoch 20/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0198 - mae: 0.1005
Epoch 20/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0201 - mae: 0.1003
Epoch 20/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0204 - mae: 0.1016
Epoch 20/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0201 - mae: 0.1015
Epoch 20/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0197 - mae: 0.0993
229/1214 [====>.........................] - ETA: 9s - loss: 0.0204 - mae: 0.1004Epoch 20/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0201 - mae: 0.1001
Epoch 20/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0198 - mae: 0.0997
Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0195 - mae: 0.0998
Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0193 - mae: 0.0991
Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0195 - mae: 0.0990
1066/1214 [=========================>....] - ETA: 1s - loss: 0.0193 - mae: 0.0983Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0199 - mae: 0.1004
1134/1214 [===========================>..] - ETA: 0s - loss: 0.0192 - mae: 0.0981Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0197 - mae: 0.1004
Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0193 - mae: 0.0982
236/1214 [====>.........................] - ETA: 10s - loss: 0.0168 - mae: 0.0979Epoch 21/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0196 - mae: 0.0989
Epoch 21/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0194 - mae: 0.0985
Epoch 22/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0191 - mae: 0.0987
1130/1214 [==========================>...] - ETA: 0s - loss: 0.0189 - mae: 0.0981Epoch 22/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0189 - mae: 0.0980
Epoch 22/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0192 - mae: 0.0980
Epoch 22/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0195 - mae: 0.0990
Epoch 22/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0193 - mae: 0.0995
39/1214 [..............................] - ETA: 11s - loss: 0.0176 - mae: 0.0962Epoch 22/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0189 - mae: 0.0973
Epoch 22/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0192 - mae: 0.0977
354/1214 [=======>......................] - ETA: 7s - loss: 0.0182 - mae: 0.0972Epoch 22/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0190 - mae: 0.0974
Epoch 23/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0187 - mae: 0.0976
Epoch 23/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0185 - mae: 0.0971
Epoch 23/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0192 - mae: 0.0983
105/1214 [=>............................] - ETA: 10s - loss: 0.0179 - mae: 0.0954Epoch 23/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0188 - mae: 0.0969
1155/1214 [===========================>..] - ETA: 0s - loss: 0.0189 - mae: 0.0982Epoch 23/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0189 - mae: 0.0983
Epoch 23/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0186 - mae: 0.0963
Epoch 23/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0188 - mae: 0.0969
Epoch 23/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0187 - mae: 0.0966
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0183 - mae: 0.0967
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0182 - mae: 0.0964
956/1214 [======================>.......] - ETA: 2s - loss: 0.0185 - mae: 0.0959Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0188 - mae: 0.0971
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0184 - mae: 0.0961
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0187 - mae: 0.0975
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0183 - mae: 0.0956
Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0185 - mae: 0.0959
Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0183 - mae: 0.0957
1077/1214 [=========================>....] - ETA: 1s - loss: 0.0183 - mae: 0.0953Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0179 - mae: 0.0953
Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0179 - mae: 0.0953
Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0185 - mae: 0.0966
Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0181 - mae: 0.0952
1178/1214 [============================>.] - ETA: 0s - loss: 0.0182 - mae: 0.0969Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0184 - mae: 0.0970
Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0181 - mae: 0.0949
Epoch 25/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0182 - mae: 0.0952
339/1214 [=======>......................] - ETA: 10s - loss: 0.0183 - mae: 0.0962Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0180 - mae: 0.0950
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0177 - mae: 0.0949
1070/1214 [=========================>....] - ETA: 1s - loss: 0.0178 - mae: 0.0943Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0177 - mae: 0.0948
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0183 - mae: 0.0961
85/1214 [=>............................] - ETA: 11s - loss: 0.0192 - mae: 0.0936Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0178 - mae: 0.0944
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0182 - mae: 0.0963
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0179 - mae: 0.0944
Epoch 26/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0179 - mae: 0.0943
Epoch 26/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0178 - mae: 0.0945
Epoch 27/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0174 - mae: 0.0940
1064/1214 [=========================>....] - ETA: 1s - loss: 0.0178 - mae: 0.0937Epoch 27/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0174 - mae: 0.0941
Epoch 27/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0180 - mae: 0.0951
Epoch 27/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0176 - mae: 0.0936
Epoch 27/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0179 - mae: 0.0955
1186/1214 [============================>.] - ETA: 0s - loss: 0.0176 - mae: 0.0938Epoch 27/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0176 - mae: 0.0937
Epoch 27/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0177 - mae: 0.0937
Epoch 27/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0176 - mae: 0.0937
1147/1214 [===========================>..] - ETA: 0s - loss: 0.0173 - mae: 0.0934Epoch 28/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0171 - mae: 0.0933
Epoch 28/30
1214/1214 [==============================] - 12s 9ms/step - loss: 0.0171 - mae: 0.0933
Epoch 28/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0179 - mae: 0.0949
Epoch 28/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0173 - mae: 0.0931
Epoch 28/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0177 - mae: 0.0949
165/1214 [===>..........................] - ETA: 9s - loss: 0.0174 - mae: 0.0950Epoch 28/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0174 - mae: 0.0930
314/1214 [======>.......................] - ETA: 8s - loss: 0.0181 - mae: 0.0930Epoch 28/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0175 - mae: 0.0931
Epoch 28/30
1214/1214 [==============================] - 12s 9ms/step - loss: 0.0173 - mae: 0.0932
1085/1214 [=========================>....] - ETA: 1s - loss: 0.0170 - mae: 0.0930Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0169 - mae: 0.0925
Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0170 - mae: 0.0929
Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0177 - mae: 0.0942
64/1214 [>.............................] - ETA: 10s - loss: 0.0178 - mae: 0.0891Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0170 - mae: 0.0921
Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0172 - mae: 0.0925
Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0175 - mae: 0.0944
Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0173 - mae: 0.0924
505/1214 [===========>..................] - ETA: 6s - loss: 0.0166 - mae: 0.0926Epoch 29/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0172 - mae: 0.0926
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0168 - mae: 0.0922
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0168 - mae: 0.0923
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0175 - mae: 0.0939
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0168 - mae: 0.0916
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0170 - mae: 0.0922
339/1214 [=======>......................] - ETA: 8s - loss: 0.0186 - mae: 0.0937Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0173 - mae: 0.0940
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0171 - mae: 0.0920
Epoch 30/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0170 - mae: 0.0924
135/135 [==============================] - 1s 4ms/step loss: 0.0165 - mae: 0.093
758/1214 [=================>............] - ETA: 4s - loss: 0.0161 - mae: 0.0911[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.0min
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0166 - mae: 0.0918
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0166 - mae: 0.0916
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0172 - mae: 0.0929
1060/1214 [=========================>....] - ETA: 1s - loss: 0.0171 - mae: 0.0938Epoch 1/30
135/135 [==============================] - 1s 4ms/step loss: 0.0171 - mae: 0.090
135/135 [==============================] - 1s 5ms/step loss: 0.0165 - mae: 0.090
886/1214 [====================>.........] - ETA: 3s - loss: 0.0163 - mae: 0.0912[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.0min
1091/1214 [=========================>....] - ETA: 1s - loss: 0.0169 - mae: 0.0916[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.0min
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0166 - mae: 0.0909
135/135 [==============================] - 1s 5ms/step loss: 0.0170 - mae: 0.093
1/135 [..............................] - ETA: 20s[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.0min
1179/1214 [============================>.] - ETA: 0s - loss: 0.0169 - mae: 0.0914Epoch 1/30
135/135 [==============================] - 1s 4ms/step- loss: 61.0827 - mae: 7.72
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0172 - mae: 0.0935
Epoch 1/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0168 - mae: 0.0913
1019/1214 [========================>.....] - ETA: 1s - loss: 0.0167 - mae: 0.0912[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.0min
1040/1214 [========================>.....] - ETA: 1s - loss: 0.0171 - mae: 0.0914Epoch 1/30
135/135 [==============================] - 1s 3ms/step loss: 0.0169 - mae: 0.09873
135/135 [==============================] - 1s 3ms/step loss: 55.2491 - mae: 7.33
1113/1214 [==========================>...] - ETA: 0s - loss: 0.0170 - mae: 0.0914[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.1min
[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.1min
14/1214 [..............................] - ETA: 9s - loss: 66.4678 - mae: 8.0755Epoch 1/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0170 - mae: 0.0914
68/1214 [>.............................] - ETA: 11s - loss: 27.2838 - mae: 4.6498Epoch 1/30
189/1214 [===>..........................] - ETA: 8s - loss: 48.3563 - mae: 6.8455Epoch 1/30
135/135 [==============================] - 1s 4ms/step loss: 14.2729 - mae: 2.793
205/1214 [====>.........................] - ETA: 9s - loss: 14.4026 - mae: 2.6518[CV] END ...........keras_model__model__learning_rate=0.0001; total time= 6.1min
77/1214 [>.............................] - ETA: 12s - loss: 28.3785 - mae: 4.6790Epoch 1/30
1214/1214 [==============================] - 13s 10ms/step - loss: 15.3267 - mae: 2.9497
Epoch 2/30
1214/1214 [==============================] - 13s 10ms/step - loss: 13.6043 - mae: 2.7586
910/1214 [=====================>........] - ETA: 3s - loss: 2.7740 - mae: 0.7890Epoch 2/30
1214/1214 [==============================] - 13s 10ms/step - loss: 2.6196 - mae: 0.7087
Epoch 2/30
1214/1214 [==============================] - 13s 10ms/step - loss: 1.8258 - mae: 0.5974
Epoch 2/30
1214/1214 [==============================] - 13s 10ms/step - loss: 2.1875 - mae: 0.6635
Epoch 2/30
1214/1214 [==============================] - 14s 10ms/step - loss: 2.0310 - mae: 0.6490
229/1214 [====>.........................] - ETA: 9s - loss: 0.0879 - mae: 0.1865Epoch 2/30
1214/1214 [==============================] - 14s 11ms/step - loss: 2.1052 - mae: 0.6417
411/1214 [=========>....................] - ETA: 8s - loss: 0.9344 - mae: 0.7440Epoch 2/30
1214/1214 [==============================] - 14s 11ms/step - loss: 1.9284 - mae: 0.6208
326/1214 [=======>......................] - ETA: 9s - loss: 0.0747 - mae: 0.1745Epoch 2/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.7106 - mae: 0.6254
Epoch 3/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.6343 - mae: 0.5892
Epoch 3/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0540 - mae: 0.1523
Epoch 3/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0445 - mae: 0.1446
Epoch 3/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0531 - mae: 0.1508
Epoch 3/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0648 - mae: 0.1613
Epoch 3/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0498 - mae: 0.1485
Epoch 3/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0493 - mae: 0.1467
Epoch 3/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.3641 - mae: 0.4266
Epoch 4/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.3480 - mae: 0.4148
Epoch 4/30
[Verbose Keras training log truncated. The output shows a cross-validated grid search over `keras_model__model__learning_rate` ∈ {0.001, 0.0001}, with 30 epochs per fit and 1214 batches per epoch (≈ 13 s per epoch, ≈ 6.5–7.4 min per cross-validation fit). Training loss (MSE) falls from roughly 0.3–2.0 on the first epoch to about 0.014–0.017 (MAE ≈ 0.086–0.09) by epoch 30, with held-out fold evaluations reporting MAE ≈ 0.086–0.11. The interleaved, partially overwritten progress bars are an artifact of several fits writing to the same output stream.]
1024/1214 [========================>.....] - ETA: 2s - loss: 0.0210 - mae: 0.1036Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0203 - mae: 0.1027
Epoch 7/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0202 - mae: 0.1011
Epoch 7/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0198 - mae: 0.1027
Epoch 7/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0308 - mae: 0.1314
Epoch 3/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0233 - mae: 0.1132
626/1214 [==============>...............] - ETA: 5s - loss: 0.0196 - mae: 0.1005Epoch 7/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0248 - mae: 0.1173
Epoch 7/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0251 - mae: 0.1187
Epoch 7/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0193 - mae: 0.1000
584/1214 [=============>................] - ETA: 6s - loss: 0.0227 - mae: 0.1114Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0195 - mae: 0.1010
Epoch 8/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0194 - mae: 0.0992
910/1214 [=====================>........] - ETA: 3s - loss: 0.0240 - mae: 0.1135Epoch 8/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0193 - mae: 0.1011
Epoch 8/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0305 - mae: 0.1312
Epoch 4/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0218 - mae: 0.1084
Epoch 8/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0229 - mae: 0.1121
729/1214 [=================>............] - ETA: 5s - loss: 0.0180 - mae: 0.0986Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0246 - mae: 0.1171
Epoch 8/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0188 - mae: 0.0990
761/1214 [=================>............] - ETA: 4s - loss: 0.0181 - mae: 0.0998Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0185 - mae: 0.0981
Epoch 9/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0186 - mae: 0.0967
Epoch 9/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0186 - mae: 0.0995
Epoch 9/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0302 - mae: 0.1304
314/1214 [======>.......................] - ETA: 9s - loss: 0.0167 - mae: 0.0956Epoch 5/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0209 - mae: 0.1055
Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0227 - mae: 0.1114
Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0225 - mae: 0.1119
725/1214 [================>.............] - ETA: 5s - loss: 0.0165 - mae: 0.0952Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0183 - mae: 0.0976
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0178 - mae: 0.0960
782/1214 [==================>...........] - ETA: 4s - loss: 0.0218 - mae: 0.1066Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0182 - mae: 0.0957
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0180 - mae: 0.0976
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0281 - mae: 0.1253
Epoch 6/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0208 - mae: 0.1053
Epoch 10/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0220 - mae: 0.1097
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0212 - mae: 0.1080
Epoch 10/30
1214/1214 [==============================] - 12s 10ms/step - loss: 0.0178 - mae: 0.0963
190/1214 [===>..........................] - ETA: 11s - loss: 0.0207 - mae: 0.1070Epoch 11/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0175 - mae: 0.0954
Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0179 - mae: 0.0977
Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0177 - mae: 0.0944
959/1214 [======================>.......] - ETA: 2s - loss: 0.0217 - mae: 0.1092Epoch 11/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0261 - mae: 0.1207
766/1214 [=================>............] - ETA: 4s - loss: 0.0209 - mae: 0.1092Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0199 - mae: 0.1026
Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0213 - mae: 0.1080
Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0212 - mae: 0.1093
1033/1214 [========================>.....] - ETA: 1s - loss: 0.0175 - mae: 0.0955Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0178 - mae: 0.0964
Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0171 - mae: 0.0944
Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0174 - mae: 0.0960
Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0173 - mae: 0.0933
Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0245 - mae: 0.1164
Epoch 8/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0194 - mae: 0.1018
Epoch 12/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0198 - mae: 0.1041
Epoch 12/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0194 - mae: 0.1034
Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0174 - mae: 0.0953
706/1214 [================>.............] - ETA: 5s - loss: 0.0177 - mae: 0.0958Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0169 - mae: 0.0936
752/1214 [=================>............] - ETA: 5s - loss: 0.0191 - mae: 0.1009Epoch 13/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0172 - mae: 0.0953
Epoch 13/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0170 - mae: 0.0927
Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0226 - mae: 0.1114
42/1214 [>.............................] - ETA: 12s - loss: 0.0129 - mae: 0.0873Epoch 9/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0199 - mae: 0.1025
Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0193 - mae: 0.1025
Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0185 - mae: 0.1002
Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0174 - mae: 0.0956
Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0166 - mae: 0.0928
Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0167 - mae: 0.0915
1169/1214 [===========================>..] - ETA: 0s - loss: 0.0230 - mae: 0.1122Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0165 - mae: 0.0928
Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0231 - mae: 0.1125
717/1214 [================>.............] - ETA: 5s - loss: 0.0183 - mae: 0.1001Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0191 - mae: 0.1004
Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0194 - mae: 0.1031
247/1214 [=====>........................] - ETA: 10s - loss: 0.0159 - mae: 0.0912Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0180 - mae: 0.0991
282/1214 [=====>........................] - ETA: 10s - loss: 0.0182 - mae: 0.1002Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0170 - mae: 0.0940
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0163 - mae: 0.0919
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0165 - mae: 0.0914
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0164 - mae: 0.0925
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0208 - mae: 0.1069
Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0188 - mae: 0.0996
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0189 - mae: 0.1010
Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0179 - mae: 0.0999
502/1214 [===========>..................] - ETA: 7s - loss: 0.0204 - mae: 0.1052Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0166 - mae: 0.0930
446/1214 [==========>...................] - ETA: 8s - loss: 0.0184 - mae: 0.0992Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0161 - mae: 0.0910
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0163 - mae: 0.0926
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0163 - mae: 0.0908
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0204 - mae: 0.1059
569/1214 [=============>................] - ETA: 6s - loss: 0.0170 - mae: 0.0922Epoch 12/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0186 - mae: 0.0999
748/1214 [=================>............] - ETA: 4s - loss: 0.0167 - mae: 0.0928Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0183 - mae: 0.0985
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0182 - mae: 0.0999
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0167 - mae: 0.0929
Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0160 - mae: 0.0913
483/1214 [==========>...................] - ETA: 8s - loss: 0.0163 - mae: 0.0956Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0160 - mae: 0.0912
Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0163 - mae: 0.0908
982/1214 [=======================>......] - ETA: 2s - loss: 0.0183 - mae: 0.1000Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0195 - mae: 0.1033
16/1214 [..............................] - ETA: 12s - loss: 0.0141 - mae: 0.0939Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0181 - mae: 0.0994
900/1214 [=====================>........] - ETA: 3s - loss: 0.0175 - mae: 0.0994Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0184 - mae: 0.0997
23/1214 [..............................] - ETA: 12s - loss: 0.0142 - mae: 0.0933Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0176 - mae: 0.0990
Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0167 - mae: 0.0930
880/1214 [====================>.........] - ETA: 3s - loss: 0.0166 - mae: 0.0914Epoch 18/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0158 - mae: 0.0904
Epoch 18/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0162 - mae: 0.0922
Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0196 - mae: 0.1031
Epoch 14/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0166 - mae: 0.0918
1016/1214 [========================>.....] - ETA: 2s - loss: 0.0173 - mae: 0.0970Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0176 - mae: 0.0981
Epoch 18/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0182 - mae: 0.0994
Epoch 18/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0167 - mae: 0.0952
Epoch 18/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0167 - mae: 0.0928
Epoch 19/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0157 - mae: 0.0906
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0186 - mae: 0.1012
Epoch 15/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0156 - mae: 0.0904
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0158 - mae: 0.0895
Epoch 19/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0179 - mae: 0.0990
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0177 - mae: 0.0981
Epoch 19/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0167 - mae: 0.0953
735/1214 [=================>............] - ETA: 4s - loss: 0.0146 - mae: 0.0890Epoch 19/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0161 - mae: 0.0911
Epoch 20/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0157 - mae: 0.0903
Epoch 20/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0190 - mae: 0.1013
1191/1214 [============================>.] - ETA: 0s - loss: 0.0156 - mae: 0.0903Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0159 - mae: 0.0895
14/1214 [..............................] - ETA: 14s - loss: 0.0153 - mae: 0.0973Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0156 - mae: 0.0904
Epoch 20/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0177 - mae: 0.0974
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0175 - mae: 0.0972
261/1214 [=====>........................] - ETA: 9s - loss: 0.0177 - mae: 0.0963 Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0166 - mae: 0.0962
Epoch 20/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0162 - mae: 0.0919
Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0155 - mae: 0.0900
Epoch 21/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0180 - mae: 0.0992
Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0155 - mae: 0.0882
1207/1214 [============================>.] - ETA: 0s - loss: 0.0159 - mae: 0.0912Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0158 - mae: 0.0912
Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0177 - mae: 0.0976
782/1214 [==================>...........] - ETA: 4s - loss: 0.0155 - mae: 0.0900Epoch 21/30
1214/1214 [==============================] - 13s 10ms/step - loss: 0.0175 - mae: 0.0971
390/1214 [========>.....................] - ETA: 8s - loss: 0.0143 - mae: 0.0884Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0163 - mae: 0.0945
1161/1214 [===========================>..] - ETA: 0s - loss: 0.0159 - mae: 0.0908Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0160 - mae: 0.0909
Epoch 22/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0153 - mae: 0.0895
1059/1214 [=========================>....] - ETA: 1s - loss: 0.0155 - mae: 0.0895Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0182 - mae: 0.0998
Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0160 - mae: 0.0902
545/1214 [============>.................] - ETA: 7s - loss: 0.0156 - mae: 0.0906Epoch 22/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0155 - mae: 0.0900
Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0167 - mae: 0.0955
Epoch 22/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0173 - mae: 0.0975
Epoch 22/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0166 - mae: 0.0957
588/1214 [=============>................] - ETA: 6s - loss: 0.0151 - mae: 0.0888Epoch 22/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0158 - mae: 0.0902
401/1214 [========>.....................] - ETA: 8s - loss: 0.0163 - mae: 0.0951Epoch 23/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0153 - mae: 0.0890
1057/1214 [=========================>....] - ETA: 1s - loss: 0.0154 - mae: 0.0878Epoch 23/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0176 - mae: 0.0979
1181/1214 [============================>.] - ETA: 0s - loss: 0.0154 - mae: 0.0879Epoch 19/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0153 - mae: 0.0878
Epoch 23/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0153 - mae: 0.0894
Epoch 23/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0167 - mae: 0.0948
Epoch 23/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0169 - mae: 0.0965
834/1214 [===================>..........] - ETA: 4s - loss: 0.0163 - mae: 0.0934Epoch 23/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0165 - mae: 0.0948
Epoch 23/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0158 - mae: 0.0901
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0152 - mae: 0.0892
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0171 - mae: 0.0966
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0154 - mae: 0.0884
Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0154 - mae: 0.0898
Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0165 - mae: 0.0946
190/1214 [===>..........................] - ETA: 11s - loss: 0.0152 - mae: 0.0901Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0167 - mae: 0.0956
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0159 - mae: 0.0934
760/1214 [=================>............] - ETA: 5s - loss: 0.0146 - mae: 0.0878Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0158 - mae: 0.0900
439/1214 [=========>....................] - ETA: 8s - loss: 0.0168 - mae: 0.0952Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0150 - mae: 0.0881
1036/1214 [========================>.....] - ETA: 2s - loss: 0.0154 - mae: 0.0901Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0175 - mae: 0.0981
Epoch 21/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0154 - mae: 0.0878
Epoch 25/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0153 - mae: 0.0896
Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0168 - mae: 0.0956
Epoch 25/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0172 - mae: 0.0965
399/1214 [========>.....................] - ETA: 9s - loss: 0.0145 - mae: 0.0880Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0157 - mae: 0.0932
Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0154 - mae: 0.0893
Epoch 26/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0150 - mae: 0.0880
830/1214 [===================>..........] - ETA: 4s - loss: 0.0174 - mae: 0.0970Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0168 - mae: 0.0962
Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0151 - mae: 0.0886
Epoch 26/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0151 - mae: 0.0874
Epoch 26/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0167 - mae: 0.0957
Epoch 26/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0169 - mae: 0.0962
Epoch 26/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0161 - mae: 0.0941
Epoch 26/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0153 - mae: 0.0886
Epoch 27/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0150 - mae: 0.0884
432/1214 [=========>....................] - ETA: 9s - loss: 0.0158 - mae: 0.0937Epoch 27/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0166 - mae: 0.0958
Epoch 23/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0151 - mae: 0.0869
198/1214 [===>..........................] - ETA: 10s - loss: 0.0143 - mae: 0.0862Epoch 27/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0152 - mae: 0.0895
1017/1214 [========================>.....] - ETA: 2s - loss: 0.0163 - mae: 0.0944Epoch 27/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0166 - mae: 0.0948
Epoch 27/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0162 - mae: 0.0939
800/1214 [==================>...........] - ETA: 4s - loss: 0.0150 - mae: 0.0885Epoch 27/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0163 - mae: 0.0943
Epoch 27/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0153 - mae: 0.0889
428/1214 [=========>....................] - ETA: 9s - loss: 0.0173 - mae: 0.0958Epoch 28/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0145 - mae: 0.0870
409/1214 [=========>....................] - ETA: 9s - loss: 0.0147 - mae: 0.0879Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0170 - mae: 0.0967
Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0151 - mae: 0.0872
Epoch 28/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0151 - mae: 0.0887
Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0167 - mae: 0.0953
155/1214 [==>...........................] - ETA: 11s - loss: 0.0148 - mae: 0.0912Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0167 - mae: 0.0950
766/1214 [=================>............] - ETA: 5s - loss: 0.0155 - mae: 0.0889Epoch 28/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0157 - mae: 0.0936
Epoch 28/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0153 - mae: 0.0889
Epoch 29/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0146 - mae: 0.0877
Epoch 29/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0164 - mae: 0.0951
860/1214 [====================>.........] - ETA: 4s - loss: 0.0169 - mae: 0.0958Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0151 - mae: 0.0887
Epoch 29/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0150 - mae: 0.0868
602/1214 [=============>................] - ETA: 6s - loss: 0.0157 - mae: 0.0892Epoch 29/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0165 - mae: 0.0943
167/1214 [===>..........................] - ETA: 11s - loss: 0.0123 - mae: 0.0839Epoch 29/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0166 - mae: 0.0949
Epoch 29/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0155 - mae: 0.0925
414/1214 [=========>....................] - ETA: 8s - loss: 0.0161 - mae: 0.0936Epoch 29/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0153 - mae: 0.0890
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0146 - mae: 0.0875
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0168 - mae: 0.0957
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0149 - mae: 0.0881
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0147 - mae: 0.0862
1061/1214 [=========================>....] - ETA: 1s - loss: 0.0163 - mae: 0.0945Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0156 - mae: 0.0921
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0163 - mae: 0.0944
26/1214 [..............................] - ETA: 13s - loss: 0.0134 - mae: 0.0908Epoch 30/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0155 - mae: 0.0923
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0148 - mae: 0.0868
135/135 [==============================] - 1s 6ms/step loss: 0.0148 - mae: 0.0820
111/1214 [=>............................] - ETA: 12s - loss: 0.0138 - mae: 0.0905[CV] END ............keras_model__model__learning_rate=0.001; total time= 6.6min
741/1214 [=================>............] - ETA: 5s - loss: 0.0146 - mae: 0.0863Epoch 1/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0160 - mae: 0.0932
Epoch 27/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0145 - mae: 0.0872
135/135 [==============================] - 1s 6ms/step loss: 1.7802 - mae: 0.6352
532/1214 [============>.................] - ETA: 7s - loss: 0.0161 - mae: 0.0939[CV] END ............keras_model__model__learning_rate=0.001; total time= 6.7min
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0148 - mae: 0.0865
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0147 - mae: 0.0875
253/1214 [=====>........................] - ETA: 10s - loss: 0.0163 - mae: 0.0944Epoch 1/30
135/135 [==============================] - 1s 8ms/step loss: 1.0410 - mae: 0.432
107/135 [======================>.......] - ETA: 0s[CV] END ............keras_model__model__learning_rate=0.001; total time= 6.7min
135/135 [==============================] - 1s 6ms/step loss: 0.0164 - mae: 0.09
757/1214 [=================>............] - ETA: 4s - loss: 0.0162 - mae: 0.0939[CV] END ............keras_model__model__learning_rate=0.001; total time= 6.7min
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0158 - mae: 0.0925
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0163 - mae: 0.0942
37/135 [=======>......................] - ETA: 0sEpoch 1/30 0.8764 - mae: 0.385568
11/1214 [..............................] - ETA: 12s - loss: 26.3553 - mae: 4.4648Epoch 1/30
135/135 [==============================] - 1s 5ms/step- loss: 8.0167 - mae: 1.917
63/135 [=============>................] - ETA: 0s[CV] END .............keras_model__model__learning_rate=0.01; total time= 6.7min
135/135 [==============================] - 1s 6ms/step loss: 0.7424 - mae: 0.34570
934/1214 [======================>.......] - ETA: 3s - loss: 0.0158 - mae: 0.0933[CV] END .............keras_model__model__learning_rate=0.01; total time= 6.7min
99/1214 [=>............................] - ETA: 11s - loss: 3.8269 - mae: 1.1452Epoch 1/30
795/1214 [==================>...........] - ETA: 4s - loss: 0.6227 - mae: 0.3132Epoch 1/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0157 - mae: 0.0928
135/135 [==============================] - 1s 7ms/step loss: 0.4658 - mae: 0.2687
422/1214 [=========>....................] - ETA: 8s - loss: 1.1706 - mae: 0.4719[CV] END .............keras_model__model__learning_rate=0.01; total time= 6.7min
1053/1214 [=========================>....] - ETA: 1s - loss: 0.0163 - mae: 0.0955Epoch 1/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.4192 - mae: 0.2508
Epoch 2/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0164 - mae: 0.0950
715/1214 [================>.............] - ETA: 5s - loss: 0.6061 - mae: 0.3301Epoch 28/30
1214/1214 [==============================] - 15s 11ms/step - loss: 0.4198 - mae: 0.2557
Epoch 2/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.3704 - mae: 0.2492
Epoch 2/30
1214/1214 [==============================] - 15s 11ms/step - loss: 0.4316 - mae: 0.2561
Epoch 2/30
1214/1214 [==============================] - 15s 11ms/step - loss: 0.4105 - mae: 0.2549
Epoch 1/30
1214/1214 [==============================] - 15s 11ms/step - loss: 0.3690 - mae: 0.2446
Epoch 2/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0306 - mae: 0.1296
Epoch 3/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0158 - mae: 0.0932
...
Epoch 29/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0159 - mae: 0.0930
Epoch 30/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0154 - mae: 0.0919
135/135 [==============================] - 1s 6ms/step - loss: 0.0159 - mae: 0.0930
[CV] END .............keras_model__model__learning_rate=0.01; total time= 6.8min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[... interleaved progress output from the remaining parallel cross-validation folds omitted ...]
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0296 - mae: 0.1285
Epoch 4/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0309 - mae: 0.1305
135/135 [==============================] - 1s 6ms/step loss: 0.0285 - mae: 0.1210
116/1214 [=>............................] - ETA: 13s - loss: 0.0285 - mae: 0.1239[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
186/1214 [===>..........................] - ETA: 11s - loss: 0.0249 - mae: 0.1180Epoch 1/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0284 - mae: 0.1241
857/1214 [====================>.........] - ETA: 4s - loss: 0.0307 - mae: 0.1306Epoch 4/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0288 - mae: 0.1259
290/1214 [======>.......................] - ETA: 10s - loss: 0.8621 - mae: 0.4554Epoch 5/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0287 - mae: 0.1261
Epoch 5/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0305 - mae: 0.1303
Epoch 5/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0303 - mae: 0.1297
250/1214 [=====>........................] - ETA: 11s - loss: 0.0304 - mae: 0.1308Epoch 5/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0278 - mae: 0.1228
Epoch 6/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0303 - mae: 0.1300
841/1214 [===================>..........] - ETA: 4s - loss: 0.0286 - mae: 0.1268Epoch 5/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.2398 - mae: 0.2296
Epoch 2/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0296 - mae: 0.1271
815/1214 [===================>..........] - ETA: 4s - loss: 0.0302 - mae: 0.1310Epoch 5/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0277 - mae: 0.1240
Epoch 6/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0290 - mae: 0.1275
1186/1214 [============================>.] - ETA: 0s - loss: 0.0293 - mae: 0.1277Epoch 6/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0292 - mae: 0.1276
Epoch 6/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0291 - mae: 0.1267
673/1214 [===============>..............] - ETA: 5s - loss: 0.0345 - mae: 0.1396Epoch 6/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0286 - mae: 0.1260
Epoch 7/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0286 - mae: 0.1266
Epoch 6/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1345
Epoch 3/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0290 - mae: 0.1261
821/1214 [===================>..........] - ETA: 4s - loss: 0.0279 - mae: 0.1238Epoch 6/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0286 - mae: 0.1259
160/1214 [==>...........................] - ETA: 11s - loss: 0.0377 - mae: 0.1426Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0281 - mae: 0.1257
Epoch 7/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0291 - mae: 0.1269
Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0278 - mae: 0.1236
700/1214 [================>.............] - ETA: 5s - loss: 0.0293 - mae: 0.1286Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0285 - mae: 0.1262
Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0302 - mae: 0.1304
Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0300 - mae: 0.1298
506/1214 [===========>..................] - ETA: 7s - loss: 0.0303 - mae: 0.1306Epoch 4/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0334 - mae: 0.1352
Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0285 - mae: 0.1259
Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0292 - mae: 0.1272
Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0296 - mae: 0.1280
Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0305 - mae: 0.1293
Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0316 - mae: 0.1331
Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0293 - mae: 0.1273
936/1214 [======================>.......] - ETA: 2s - loss: 0.0280 - mae: 0.1242Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0290 - mae: 0.1269
Epoch 5/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0295 - mae: 0.1276
168/1214 [===>..........................] - ETA: 12s - loss: 0.0226 - mae: 0.1151Epoch 8/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0305 - mae: 0.1302
Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0297 - mae: 0.1292
Epoch 9/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0318 - mae: 0.1322
Epoch 9/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0297 - mae: 0.1274
Epoch 9/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0300 - mae: 0.1294
686/1214 [===============>..............] - ETA: 6s - loss: 0.0326 - mae: 0.1329Epoch 10/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0291 - mae: 0.1270
Epoch 9/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0289 - mae: 0.1276
Epoch 6/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0305 - mae: 0.1289
1135/1214 [===========================>..] - ETA: 0s - loss: 0.0310 - mae: 0.1300Epoch 9/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0309 - mae: 0.1305
Epoch 10/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0339 - mae: 0.1374
967/1214 [======================>.......] - ETA: 2s - loss: 0.0285 - mae: 0.1255Epoch 10/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0294 - mae: 0.1271
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0299 - mae: 0.1281
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0296 - mae: 0.1289
Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0298 - mae: 0.1290
Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0282 - mae: 0.1255
706/1214 [================>.............] - ETA: 5s - loss: 0.0298 - mae: 0.1297Epoch 7/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0306 - mae: 0.1300
1120/1214 [==========================>...] - ETA: 1s - loss: 0.0285 - mae: 0.1265Epoch 10/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0293 - mae: 0.1275
569/1214 [=============>................] - ETA: 6s - loss: 0.0298 - mae: 0.1296Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0313 - mae: 0.1327
738/1214 [=================>............] - ETA: 5s - loss: 0.0305 - mae: 0.1310Epoch 11/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0315 - mae: 0.1317
Epoch 11/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0339 - mae: 0.1364
Epoch 11/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0317 - mae: 0.1335
Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0312 - mae: 0.1303
Epoch 11/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0302 - mae: 0.1306
976/1214 [=======================>......] - ETA: 2s - loss: 0.0297 - mae: 0.1300Epoch 8/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0311 - mae: 0.1316
Epoch 11/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0290 - mae: 0.1265
Epoch 12/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0318 - mae: 0.1341
489/1214 [===========>..................] - ETA: 8s - loss: 0.0279 - mae: 0.1256Epoch 12/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0325 - mae: 0.1340
44/1214 [>.............................] - ETA: 12s - loss: 0.0378 - mae: 0.1475Epoch 12/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0306 - mae: 0.1306
1054/1214 [=========================>....] - ETA: 1s - loss: 0.0284 - mae: 0.1254Epoch 12/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0309 - mae: 0.1315
881/1214 [====================>.........] - ETA: 4s - loss: 0.0309 - mae: 0.1311Epoch 13/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0285 - mae: 0.1256
79/1214 [>.............................] - ETA: 13s - loss: 0.0509 - mae: 0.1722Epoch 12/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0301 - mae: 0.1294
Epoch 9/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0298 - mae: 0.1281
Epoch 12/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0306 - mae: 0.1303
Epoch 13/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0322 - mae: 0.1345
Epoch 13/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0325 - mae: 0.1335
293/1214 [======>.......................] - ETA: 11s - loss: 0.0306 - mae: 0.1281Epoch 13/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0309 - mae: 0.1312
Epoch 13/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0332 - mae: 0.1355
384/1214 [========>.....................] - ETA: 10s - loss: 0.0323 - mae: 0.1350Epoch 14/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0324 - mae: 0.1335
179/1214 [===>..........................] - ETA: 14s - loss: 0.0328 - mae: 0.1323Epoch 13/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0299 - mae: 0.1300
Epoch 10/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0303 - mae: 0.1294
Epoch 13/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0322 - mae: 0.1340
Epoch 14/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0311 - mae: 0.1322
Epoch 14/30
1214/1214 [==============================] - 16s 13ms/step - loss: 0.0334 - mae: 0.1359
Epoch 14/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0336 - mae: 0.1370
Epoch 14/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0310 - mae: 0.1318
1143/1214 [===========================>..] - ETA: 0s - loss: 0.0296 - mae: 0.1280Epoch 15/30
1214/1214 [==============================] - 15s 13ms/step - loss: 0.0296 - mae: 0.1280
659/1214 [===============>..............] - ETA: 6s - loss: 0.0286 - mae: 0.1256Epoch 14/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0316 - mae: 0.1332
Epoch 11/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0306 - mae: 0.1294
Epoch 14/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0295 - mae: 0.1276
Epoch 15/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0317 - mae: 0.1338
Epoch 15/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0327 - mae: 0.1345
Epoch 15/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0328 - mae: 0.1342
Epoch 15/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0314 - mae: 0.1325
1171/1214 [===========================>..] - ETA: 0s - loss: 0.0315 - mae: 0.1322Epoch 16/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0312 - mae: 0.1318
Epoch 15/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0344 - mae: 0.1390
688/1214 [================>.............] - ETA: 5s - loss: 0.0323 - mae: 0.1356Epoch 12/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0323 - mae: 0.1346
Epoch 15/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0306 - mae: 0.1304
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0319 - mae: 0.1335
509/1214 [===========>..................] - ETA: 7s - loss: 0.0303 - mae: 0.1314Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0324 - mae: 0.1344
Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0291 - mae: 0.1278
820/1214 [===================>..........] - ETA: 4s - loss: 0.0302 - mae: 0.1298Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0343 - mae: 0.1388
97/1214 [=>............................] - ETA: 13s - loss: 0.0393 - mae: 0.1466Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0295 - mae: 0.1281
467/1214 [==========>...................] - ETA: 8s - loss: 0.0368 - mae: 0.1439Epoch 16/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0318 - mae: 0.1328
Epoch 13/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0318 - mae: 0.1327
Epoch 16/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0317 - mae: 0.1325
Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0316 - mae: 0.1328
Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0321 - mae: 0.1334
863/1214 [====================>.........] - ETA: 3s - loss: 0.0356 - mae: 0.1426Epoch 17/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0311 - mae: 0.1312
Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0348 - mae: 0.1401
Epoch 18/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0319 - mae: 0.1329
Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0317 - mae: 0.1332
Epoch 14/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0326 - mae: 0.1344
Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0329 - mae: 0.1350
Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0304 - mae: 0.1308
1147/1214 [===========================>..] - ETA: 0s - loss: 0.0310 - mae: 0.1314Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0314 - mae: 0.1317
Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0310 - mae: 0.1320
Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0341 - mae: 0.1390
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0333 - mae: 0.1361
400/1214 [========>.....................] - ETA: 8s - loss: 0.0306 - mae: 0.1291Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0335 - mae: 0.1362
669/1214 [===============>..............] - ETA: 5s - loss: 0.0316 - mae: 0.1307Epoch 15/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0313 - mae: 0.1318
530/1214 [============>.................] - ETA: 7s - loss: 0.0344 - mae: 0.1400Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0316 - mae: 0.1318
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1345
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0314 - mae: 0.1318
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0305 - mae: 0.1300
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0331 - mae: 0.1362
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0324 - mae: 0.1343
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0312 - mae: 0.1319
258/1214 [=====>........................] - ETA: 10s - loss: 0.0258 - mae: 0.1239Epoch 16/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0305 - mae: 0.1299
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0314 - mae: 0.1317
92/1214 [=>............................] - ETA: 11s - loss: 0.0348 - mae: 0.1412Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0341 - mae: 0.1382
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1324
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0308 - mae: 0.1314
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1337
599/1214 [=============>................] - ETA: 6s - loss: 0.0322 - mae: 0.1333Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0311 - mae: 0.1312
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0329 - mae: 0.1350
Epoch 17/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0344 - mae: 0.1387
Epoch 20/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0331 - mae: 0.1355
Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1343
Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0342 - mae: 0.1383
861/1214 [====================>.........] - ETA: 3s - loss: 0.0321 - mae: 0.1328Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0315 - mae: 0.1327
Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0304 - mae: 0.1302
Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0331 - mae: 0.1360
976/1214 [=======================>......] - ETA: 2s - loss: 0.0316 - mae: 0.1334Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0320 - mae: 0.1338
Epoch 18/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0307 - mae: 0.1299
969/1214 [======================>.......] - ETA: 2s - loss: 0.0326 - mae: 0.1354Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0334 - mae: 0.1360
328/1214 [=======>......................] - ETA: 9s - loss: 0.0330 - mae: 0.1362Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0320 - mae: 0.1341
Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0328 - mae: 0.1350
244/1214 [=====>........................] - ETA: 10s - loss: 0.0328 - mae: 0.1337Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0312 - mae: 0.1318
Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0323 - mae: 0.1344
695/1214 [================>.............] - ETA: 5s - loss: 0.0322 - mae: 0.1331Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0310 - mae: 0.1307
140/1214 [==>...........................] - ETA: 11s - loss: 0.0312 - mae: 0.1315Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0329 - mae: 0.1353
Epoch 19/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0318 - mae: 0.1324
836/1214 [===================>..........] - ETA: 4s - loss: 0.0343 - mae: 0.1362Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0319 - mae: 0.1333
Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0324 - mae: 0.1353
Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0330 - mae: 0.1355
587/1214 [=============>................] - ETA: 6s - loss: 0.0325 - mae: 0.1347Epoch 23/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0309 - mae: 0.1307
Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0315 - mae: 0.1329
889/1214 [====================>.........] - ETA: 3s - loss: 0.0322 - mae: 0.1348Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0322 - mae: 0.1340
Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0324 - mae: 0.1343
Epoch 20/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0293 - mae: 0.1269
Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0329 - mae: 0.1354
677/1214 [===============>..............] - ETA: 5s - loss: 0.0305 - mae: 0.1292Epoch 24/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0328 - mae: 0.1360
442/1214 [=========>....................] - ETA: 8s - loss: 0.0322 - mae: 0.1358Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0323 - mae: 0.1342
942/1214 [======================>.......] - ETA: 2s - loss: 0.0306 - mae: 0.1296Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0307 - mae: 0.1300
Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0325 - mae: 0.1353
873/1214 [====================>.........] - ETA: 3s - loss: 0.0333 - mae: 0.1372Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0342 - mae: 0.1379
377/1214 [========>.....................] - ETA: 9s - loss: 0.0326 - mae: 0.1364Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0342 - mae: 0.1384
350/1214 [=======>......................] - ETA: 9s - loss: 0.0333 - mae: 0.1324Epoch 21/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0326 - mae: 0.1342
604/1214 [=============>................] - ETA: 6s - loss: 0.0304 - mae: 0.1304Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0310 - mae: 0.1313
294/1214 [======>.......................] - ETA: 9s - loss: 0.0404 - mae: 0.1471Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0313 - mae: 0.1327
826/1214 [===================>..........] - ETA: 4s - loss: 0.0581 - mae: 0.1549Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0335 - mae: 0.1368
351/1214 [=======>......................] - ETA: 9s - loss: 0.0296 - mae: 0.1278Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0507 - mae: 0.1495
Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0318 - mae: 0.1334
16/1214 [..............................] - ETA: 13s - loss: 0.0314 - mae: 0.1238Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0324 - mae: 0.1341
105/1214 [=>............................] - ETA: 11s - loss: 0.0352 - mae: 0.1412Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0333 - mae: 0.1367
Epoch 22/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0310 - mae: 0.1301
882/1214 [====================>.........] - ETA: 3s - loss: 0.0344 - mae: 0.1394Epoch 25/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0334 - mae: 0.1363
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0339 - mae: 0.1382
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0345 - mae: 0.1389
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0318 - mae: 0.1335
616/1214 [==============>...............] - ETA: 6s - loss: 0.0290 - mae: 0.1263Epoch 27/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0332 - mae: 0.1367
Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1329
491/1214 [===========>..................] - ETA: 7s - loss: 0.0324 - mae: 0.1324Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0325 - mae: 0.1349
Epoch 23/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0320 - mae: 0.1332
1153/1214 [===========================>..] - ETA: 0s - loss: 0.0328 - mae: 0.1350Epoch 26/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0324 - mae: 0.1343
Epoch 27/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0309 - mae: 0.1319
Epoch 27/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0326 - mae: 0.1348
Epoch 27/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0317 - mae: 0.1338
Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0322 - mae: 0.1344
849/1214 [===================>..........] - ETA: 3s - loss: 0.0323 - mae: 0.1338Epoch 27/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0325 - mae: 0.1347
Epoch 27/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1342
Epoch 24/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0315 - mae: 0.1323
Epoch 27/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0350 - mae: 0.1397
53/1214 [>.............................] - ETA: 13s - loss: 0.0302 - mae: 0.1280Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0321 - mae: 0.1347
Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0345 - mae: 0.1381
273/1214 [=====>........................] - ETA: 10s - loss: 0.0327 - mae: 0.1323Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0314 - mae: 0.1328
Epoch 29/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0317 - mae: 0.1337
290/1214 [======>.......................] - ETA: 10s - loss: 0.0308 - mae: 0.1337Epoch 28/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0332 - mae: 0.1358
93/1214 [=>............................] - ETA: 13s - loss: 0.0261 - mae: 0.1265Epoch 28/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0335 - mae: 0.1373
773/1214 [==================>...........] - ETA: 5s - loss: 0.0312 - mae: 0.1330Epoch 25/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0296 - mae: 0.1279
873/1214 [====================>.........] - ETA: 3s - loss: 0.0328 - mae: 0.1344Epoch 28/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0320 - mae: 0.1325
924/1214 [=====================>........] - ETA: 3s - loss: 0.0325 - mae: 0.1336Epoch 29/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0319 - mae: 0.1339
Epoch 29/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0335 - mae: 0.1360
Epoch 29/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0303 - mae: 0.1310
Epoch 30/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0324 - mae: 0.1355
595/1214 [=============>................] - ETA: 7s - loss: 0.0327 - mae: 0.1339Epoch 29/30
1214/1214 [==============================] - 15s 12ms/step - loss: 0.0329 - mae: 0.1349
Epoch 29/30
1214/1214 [==============================] - 14s 12ms/step - loss: 0.0343 - mae: 0.1387
800/1214 [==================>...........] - ETA: 4s - loss: 0.0305 - mae: 0.1296Epoch 26/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0332 - mae: 0.1355
1015/1214 [========================>.....] - ETA: 2s - loss: 0.0316 - mae: 0.1325Epoch 29/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0334 - mae: 0.1358
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0312 - mae: 0.1321
Epoch 30/30
1214/1214 [==============================] - 14s 11ms/step - loss: 0.0328 - mae: 0.1349
Epoch 30/30
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0331 - mae: 0.1365
1214/1214 [==============================] - 13s 11ms/step - loss: 0.0329 - mae: 0.1366
606/1214 [=============>................] - ETA: 6s - loss: 0.0317 - mae: 0.1330Epoch 30/30
135/135 [==============================] - 1s 5ms/step loss: 0.0326 - mae: 0.1320
334/1214 [=======>......................] - ETA: 9s - loss: 0.0325 - mae: 0.1364[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[Interleaved Keras progress bars from the parallel cross-validation workers omitted; the final epochs for learning_rate=0.1 reported loss ≈ 0.031–0.035 and mae ≈ 0.13–0.14.]
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.9min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.8min
[CV] END ..............keras_model__model__learning_rate=0.1; total time= 6.3min
Epoch 1/30
1349/1349 [==============================] - 5s 4ms/step - loss: 2.0211 - mae: 0.6295
Epoch 2/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0478 - mae: 0.1427
Epoch 3/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0261 - mae: 0.1140
Epoch 4/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0224 - mae: 0.1071
Epoch 5/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0207 - mae: 0.1031
Epoch 6/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0199 - mae: 0.1015
Epoch 7/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0188 - mae: 0.0984
Epoch 8/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0181 - mae: 0.0963
Epoch 9/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0176 - mae: 0.0951
Epoch 10/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0174 - mae: 0.0947
Epoch 11/30
1349/1349 [==============================] - 6s 4ms/step - loss: 0.0169 - mae: 0.0929
Epoch 12/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0166 - mae: 0.0922
Epoch 13/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0166 - mae: 0.0925
Epoch 14/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0159 - mae: 0.0902
Epoch 15/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0163 - mae: 0.0918
Epoch 16/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0157 - mae: 0.0893
Epoch 17/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0157 - mae: 0.0902
Epoch 18/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0155 - mae: 0.0888
Epoch 19/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0153 - mae: 0.0885
Epoch 20/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0154 - mae: 0.0889
Epoch 21/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0153 - mae: 0.0888
Epoch 22/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0149 - mae: 0.0879
Epoch 23/30
1349/1349 [==============================] - 7s 5ms/step - loss: 0.0148 - mae: 0.0872
Epoch 24/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0146 - mae: 0.0864
Epoch 25/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0149 - mae: 0.0875
Epoch 26/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0145 - mae: 0.0864
Epoch 27/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0147 - mae: 0.0873
Epoch 28/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0143 - mae: 0.0854
Epoch 29/30
1349/1349 [==============================] - 5s 3ms/step - loss: 0.0142 - mae: 0.0855
Epoch 30/30
1349/1349 [==============================] - 5s 4ms/step - loss: 0.0143 - mae: 0.0860
/Users/fernando/miniconda3/envs/MLMIC25/lib/python3.10/site-packages/sklearn/compose/_column_transformer.py:1623: FutureWarning:
The format of the columns of the 'remainder' transformer in ColumnTransformer.transformers_ will change in version 1.7 to match the format of the other transformers.
At the moment the remainder columns are stored as indices (of type int). With the same ColumnTransformer configuration, in the future they will be stored as column names (of type str).
To use the new behavior now and suppress this warning, use ColumnTransformer(force_int_remainder_cols=False).
warnings.warn(
GridSearchCV(cv=KFold(n_splits=10, random_state=2025, shuffle=True),
estimator=Pipeline(steps=[('preproc',
Pipeline(steps=[('remove_outliers',
ColumnTransformer(force_int_remainder_cols=False,
remainder='passthrough',
transformers=[('outlier_remover',
FunctionTransformer(func=<function create_outlier_transformer.<locals>.<lambda> at 0x323733760>),
['carat',
'de...
PolynomialFeatures(include_bias=False),
['x'])])),
('name_cleanup_polyterms',
ColumnNameCleaner())])),
('keras_model',
KerasRegressor(batch_size=32, epochs=30, learning_rate=0.001, model=<function build_keras_model at 0x3251b9900>, optimizer='adam'))]),
n_jobs=-1,
param_grid={'keras_model__model__learning_rate': [0.0001, 0.001,
0.01, 0.1]},
                               scoring='neg_mean_squared_error', verbose=2)
Hide the code
keras_gridCV.best_params_
{'keras_model__model__learning_rate': 0.001}
We have commented out some code in the grid search to hint at the many other aspects of the network architecture that you could explore. But as we said before, that subject is best tackled in a dedicated course on deep learning, using the proper tools of the trade.
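As a lighter-weight illustration of what such an exploration could look like, here is a minimal sketch using scikit-learn's MLPClassifier's sibling MLPRegressor (imported at the top of this notebook) to grid-search an architectural choice (hidden layer sizes) alongside the learning rate. The data, parameter values, and grid here are illustrative placeholders, not the diamonds pipeline or recommended settings.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data standing in for a real dataset
X, y = make_regression(n_samples=300, n_features=8, noise=0.1, random_state=2025)

# Scale inputs, then fit a small multilayer perceptron
pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("mlp", MLPRegressor(max_iter=500, random_state=2025)),
])

# Grid over the architecture as well as the optimization settings,
# not just the learning rate
param_grid = {
    "mlp__hidden_layer_sizes": [(32,), (64,), (32, 32)],
    "mlp__learning_rate_init": [0.001, 0.01],
}

grid = GridSearchCV(pipe, param_grid,
                    cv=KFold(n_splits=3, shuffle=True, random_state=2025),
                    scoring="neg_mean_squared_error")
grid.fit(X, y)
print(grid.best_params_)
```

The same pattern extends to any constructor argument of the model build function: with the `KerasRegressor` wrapper used above, each tunable argument of `build_keras_model` becomes a `keras_model__model__...` entry in the grid.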
References
Fitch, Frederic B. 1944. “McCulloch, Warren S., and Pitts, Walter. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, pp. 115–133.”
Géron, Aurélien. 2022.
Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow . 3rd ed. Sebastopol, CA: O’Reilly Media.
https://www.oreilly.com/library/view/hands-on-machine-learning/9781098125967/ .
Glassner, Andrew. 2021. Deep Learning: A Visual Approach . No Starch Press.
Hecht-Nielsen, Robert. 1990. Neurocomputing . Addison-Wesley.
Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. 1990.
“Universal Approximation of an Unknown Mapping and Its Derivatives Using Multilayer Feedforward Networks.” Neural Networks 3 (5): 551–60.
https://doi.org/10.1016/0893-6080(90)90005-6 .
James, Gareth, Daniela Witten, Trevor Hastie, Robert Tibshirani, and Jonathan Taylor. 2023.
An Introduction to Statistical Learning with Applications in Python . Springer International Publishing.
https://doi.org/10.1007/978-3-031-38747-0 .
Kneusel, R. T. 2021.
Practical Deep Learning: A Python-Based Introduction . No Starch Press.
https://nostarch.com/practical-deep-learning-python .